Test Report: Docker_Linux 11610

Commit: 64a41824c53cd396e29af8e40a1e5ab125aa9bf4

Failed tests (1/266)

Order  Failed test                                        Duration
286    TestStartStop/group/old-k8s-version/serial/Pause   7.55s
TestStartStop/group/old-k8s-version/serial/Pause (7.55s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 pause -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=1: exit status 80 (3.080283228s)

-- stdout --
	* Pausing node old-k8s-version-20210609012901-9941 ... 
	
	

-- /stdout --
** stderr ** 
	I0609 01:41:59.186429  373430 out.go:291] Setting OutFile to fd 1 ...
	I0609 01:41:59.186505  373430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:41:59.186509  373430 out.go:304] Setting ErrFile to fd 2...
	I0609 01:41:59.186512  373430 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:41:59.186607  373430 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	I0609 01:41:59.186755  373430 out.go:298] Setting JSON to false
	I0609 01:41:59.186773  373430 mustload.go:65] Loading cluster: old-k8s-version-20210609012901-9941
	I0609 01:41:59.187362  373430 cli_runner.go:115] Run: docker container inspect old-k8s-version-20210609012901-9941 --format={{.State.Status}}
	I0609 01:41:59.225850  373430 host.go:66] Checking if "old-k8s-version-20210609012901-9941" exists ...
	I0609 01:41:59.226895  373430 pause.go:58] "namespaces" [kube-system kubernetes-dashboard storage-gluster istio-operator]="keys" map[addons:[] all:%!s(bool=false) apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:%!s(int=8443) auto-update-drivers:%!s(bool=true) base-image:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 bootstrapper:kubeadm cache-images:%!s(bool=true) cancel-scheduled:%!s(bool=false) cni: container-runtime:docker cpus:%!s(int=2) cri-socket: delete-on-failure:%!s(bool=false) disable-driver-mounts:%!s(bool=false) disk-size:20000mb dns-domain:cluster.local dns-proxy:%!s(bool=false) docker-env:[] docker-opt:[] download-only:%!s(bool=false) driver: dry-run:%!s(bool=false) embed-certs:%!s(bool=false) enable-default-cni:%!s(bool=false) extra-config: feature-gates: force:%!s(bool=false) force-systemd:%!s(bool=false) host-dns-resolver:%!s(bool=true) host-only-cidr:192.168.99.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:%!s(bool=false) hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:%!s(bool=true) interactive:%!s(bool=true) iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.20.0.iso https://github.com/kubernetes/minikube/releases/download/v1.20.0/minikube-v1.20.0.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.20.0.iso] keep-context:%!s(bool=false) keep-context-active:%!s(bool=false) kubernetes-version: kvm-gpu:%!s(bool=false) kvm-hidden:%!s(bool=false) kvm-network:default kvm-numa-count:%!s(int=1) kvm-qemu-uri:qemu:///system listen-address: memory: mount:%!s(bool=false) mount-string:/home/jenkins:/minikube-host namespace:default nat-nic-type:virtio native-ssh:%!s(bool=true) network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-vtx-check:%!s(bool=false) nodes:%!s(int=1) output:text ports:[] preload:%!s(bool=true) profile:old-k8s-version-20210609012901-9941 purge:%!s(bool=false) registry-mirror:[] reminderwaitperiodinhours:%!s(int=24) schedule:0s service-cluster-ip-range:10.96.0.0/12 showbootstrapperdeprecationnotification:%!s(bool=true) showdriverdeprecationnotification:%!s(bool=true) ssh-ip-address: ssh-key: ssh-port:%!s(int=22) ssh-user:root trace: user: uuid: vm:%!s(bool=false) vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantkubectldownloadmsg:%!s(bool=true) wantnonedriverwarning:%!s(bool=true) wantreporterror:%!s(bool=false) wantreporterrorprompt:%!s(bool=true) wantupdatenotification:%!s(bool=true)]="(MISSING)"
	I0609 01:41:59.229384  373430 out.go:170] * Pausing node old-k8s-version-20210609012901-9941 ... 
	I0609 01:41:59.229409  373430 host.go:66] Checking if "old-k8s-version-20210609012901-9941" exists ...
	I0609 01:41:59.229744  373430 ssh_runner.go:149] Run: systemctl --version
	I0609 01:41:59.229791  373430 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" old-k8s-version-20210609012901-9941
	I0609 01:41:59.267633  373430 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32960 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/old-k8s-version-20210609012901-9941/id_rsa Username:docker}
	I0609 01:41:59.357566  373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
	I0609 01:41:59.480954  373430 retry.go:31] will retry after 276.165072ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0609 01:41:59.757382  373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
	I0609 01:41:59.875735  373430 retry.go:31] will retry after 540.190908ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0609 01:42:00.416476  373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
	I0609 01:42:00.527096  373430 retry.go:31] will retry after 655.06503ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0609 01:42:01.182816  373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
	I0609 01:42:01.290687  373430 retry.go:31] will retry after 791.196345ms: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	I0609 01:42:02.082620  373430 ssh_runner.go:149] Run: sudo systemctl disable kubelet
	I0609 01:42:02.191443  373430 out.go:170] 
	W0609 01:42:02.191575  373430 out.go:235] X Exiting due to GUEST_PAUSE: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	X Exiting due to GUEST_PAUSE: kubelet disable: sudo systemctl disable kubelet: Process exited with status 1
	stdout:
	
	stderr:
	Synchronizing state of kubelet.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install disable kubelet
	update-rc.d: error: kubelet Default-Start contains no runlevels, aborting.
	
	W0609 01:42:02.191596  373430 out.go:235] * 
	* 
	W0609 01:42:02.202509  373430 out.go:235] ╭──────────────────────────────────────────────────────────────────────────────╮
	╭──────────────────────────────────────────────────────────────────────────────╮
	W0609 01:42:02.202529  373430 out.go:235] │                                                                              │
	│                                                                              │
	W0609 01:42:02.202534  373430 out.go:235] │    * If the above advice does not help, please let us know:                  │
	│    * If the above advice does not help, please let us know:                  │
	W0609 01:42:02.202555  373430 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                │
	W0609 01:42:02.202560  373430 out.go:235] │                                                                              │
	│                                                                              │
	W0609 01:42:02.202565  373430 out.go:235] │    * Please attach the following file to the GitHub issue:                   │
	│    * Please attach the following file to the GitHub issue:                   │
	W0609 01:42:02.202571  373430 out.go:235] │    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	│    * - /tmp/minikube_pause_49fdaea37aad8ebccb761973c21590cc64efe8d9_0.log    │
	W0609 01:42:02.202578  373430 out.go:235] │                                                                              │
	│                                                                              │
	W0609 01:42:02.202583  373430 out.go:235] ╰──────────────────────────────────────────────────────────────────────────────╯
	╰──────────────────────────────────────────────────────────────────────────────╯
	W0609 01:42:02.202587  373430 out.go:235] 
	
	I0609 01:42:02.204165  373430 out.go:170] 

** /stderr **
start_stop_delete_test.go:284: out/minikube-linux-amd64 pause -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=1 failed: exit status 80
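Root cause: all five attempts fail identically. systemd falls back to SysV compatibility for kubelet.service, and update-rc.d aborts because the unit's Default-Start header lists no runlevels, so "sudo systemctl disable kubelet" always exits 1; minikube's retry helper then gives up and surfaces the failure as GUEST_PAUSE (exit status 80). The Go program below is a minimal sketch of that retry shape only, not minikube's actual retry.go: the delays are copied from the log above, and the command runs through a local shell where the real code uses the node's SSH session.

// retrydisable.go - illustrative sketch, not minikube source.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// disableKubelet retries the command with the delays the log shows,
// returning the error from the final attempt if none succeeds.
func disableKubelet(run func(cmd string) error) error {
	delays := []time.Duration{ // copied from the retry.go lines above
		276 * time.Millisecond,
		540 * time.Millisecond,
		655 * time.Millisecond,
		791 * time.Millisecond,
	}
	var err error
	for _, d := range delays {
		if err = run("sudo systemctl disable kubelet"); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: kubelet disable: %v\n", d, err)
		time.Sleep(d)
	}
	return run("sudo systemctl disable kubelet") // fifth and final attempt, as in the log
}

func main() {
	err := disableKubelet(func(cmd string) error {
		// Illustration only: run locally; minikube executes this over SSH on the node.
		return exec.Command("sh", "-c", cmd).Run()
	})
	if err != nil {
		fmt.Println("X Exiting due to GUEST_PAUSE:", err)
		os.Exit(80) // pause reports this failure class as exit status 80
	}
}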
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect old-k8s-version-20210609012901-9941
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210609012901-9941:

-- stdout --
	[
	    {
	        "Id": "91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f",
	        "Created": "2021-06-09T01:32:22.976408213Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-06-09T01:34:39.439780041Z",
	            "FinishedAt": "2021-06-09T01:34:37.912284168Z"
	        },
	        "Image": "sha256:9fce26cb202ecbcb479d0e9dcc943ed048e5957c0bb68667d9476ebc413ee6d7",
	        "ResolvConfPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hostname",
	        "HostsPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hosts",
	        "LogPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f-json.log",
	        "Name": "/old-k8s-version-20210609012901-9941",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210609012901-9941:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210609012901-9941",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614-init/diff:/var/lib/docker/overlay2/bc56a5d6f9b885d4e990c356e0ccfc01ecbed88f252ebfaa9441de3180832d7f/diff:/var/lib/docker/overlay2/25b993e35a4369dc1c3bb5a1579e6e35329eea51bcbd403abb32859a67061a54/diff:/var/lib/docker/overlay2/1fe8141f79894ceaa71723e3cebb26aaf6eb09b92957f7ef1ad563a53df17477/diff:/var/lib/docker/overlay2/c43074dca065bc9311721e20aecd4b6af65294c44e7d9ff6f84a18717d22f9da/diff:/var/lib/docker/overlay2/1318b2c7f3cf224a7ccebeb69bbc1127489945bbb88c21f3171770868a161187/diff:/var/lib/docker/overlay2/c38fd14f646377d81cc91524a921d99d0518ca09e12d17c45948037013fd9100/diff:/var/lib/docker/overlay2/3860f2d47e6d7da92eb5946fda824e25f4c789d00d7e8daa71d0200aac14b536/diff:/var/lib/docker/overlay2/f55aac0c255ec87a42f4d6bc6e79a51ccac3a1d472b1ef4565f141af1acedb04/diff:/var/lib/docker/overlay2/7a1f3b94ec1a7fec96e3f1c789cb025636706f45db2f63cafd48827296910d1d/diff:/var/lib/docker/overlay2/653b9d24f60635898ac8c6e1b372c54937a708e1e483d47012bc30c58bba0c8c/diff:/var/lib/docker/overlay2/c1832b167afb6406029f607ff5bfad73774ce698299c2b90633d157123654c52/diff:/var/lib/docker/overlay2/75fc291915e6994891ddc9a151bd4c24056ab74e6c8428ba1aef2b2949bbc56e/diff:/var/lib/docker/overlay2/8187764e5fdd094760f8daef22c41c28995fd009c1c56d956db1bb78266b84b2/diff:/var/lib/docker/overlay2/8257db85fb8192780c9e79b131704c61b85e47f9e5c7152097b1a341d06f5840/diff:/var/lib/docker/overlay2/e7499e6556225f397b775719266146f16285f25036f4cf348b09e2fd3be18982/diff:/var/lib/docker/overlay2/84dea696e080b4925128f5b32c22c548c34a63a9dfafa5cb45a932dded279620/diff:/var/lib/docker/overlay2/0646a50eb26264b2a4349823800615095034ab376268714c37e1193106307a2a/diff:/var/lib/docker/overlay2/873d4336e86132442a84ef0da60e4f8fdf8e4989093c0f2a4279120e10ad4f2c/diff:/var/lib/docker/overlay2/44007c68fc2016e815ed96a5faadd25bfb35c362bf1b0521c430ef2ea3805f42/diff:/var/lib/docker/overlay2/7f832f8cf06c783bc6789b50392d803201e52f6baa4a788b5ce48169c94316eb/diff:/var/lib/docker/overlay2/aa919f3d56d7f8b40e56ee381db724e83ee09c96eb696e67326ae47e81324228/diff:/var/lib/docker/overlay2/c53704cae60bb8bd8b355c2d6fb142c9e105dbfeeece4ba9ee0eb81aaaa83fe9/diff:/var/lib/docker/overlay2/1d80475a809da44174d557238fbb00860567d808a157fc2291ac5fedb6f8b2d2/diff:/var/lib/docker/overlay2/d7e1256a346a88b7ce7e6fe9d6ab1146a2c7705c99fcb974ad10b671573b6b83/diff:/var/lib/docker/overlay2/67dc882ee4f992f5a9dc58b56bf7d7a6e78ffe50ccd6227d33d9e2047b7ff877/diff:/var/lib/docker/overlay2/156a8e643f241fdf84afe135ad766dbedd0c515a725939d012de628eb9dd2013/diff:/var/lib/docker/overlay2/ee244a7deb19ed9dc719af435d92c54624874690ce0999c7d030e2f57ecb9e6a/diff:/var/lib/docker/overlay2/91f8a889599c1faaa7f40cc449793deff620d17e83e88dac22c223f131237b12/diff:/var/lib/docker/overlay2/fa8fc61ecf97cd7f2b96efc9d54ba3d9a5b32dcdbb844f360ee173af8fae43a7/diff:/var/lib/docker/overlay2/908106b57878c9eeda6e0d202eee052dee30050250f2a3e5c7d61739d6548623/diff:/var/lib/docker/overlay2/98083c942683a1ac5defcb4b953ba78bbab830ad8c88c4dd145379ebe55e20a9/diff:/var/lib/docker/overlay2/980703603c9fd3a987c703f9800e56f69031cc7d19f3c692d95eb0937cbb5fd7/diff:/var/lib/docker/overlay2/bc7be9aeb566f06fe346d144629a571aec3e378e82aedf4d6c3fb065569091b2/diff:/var/lib/docker/overlay2/e61aabb9eb2161801d4795e4a00f41afd54c504a52aeeef70d49d2a4f47fcd99/diff:/var/lib/docker/overlay2/a69e80d9160e6158cf9f37881d60928bf3221341b1fffe8d2855488233278102/diff:/var/lib/docker/overlay2/f76fd1ba3588d22f5228ab597df7a62e20a79217c1712dbc33e20061e12891c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210609012901-9941",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210609012901-9941/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210609012901-9941",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1aaecc7a078c61af85d4e6c7c12ffcbc3226c3c0b6bdcdb83ef76e454d99e1ed",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1aaecc7a078c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210609012901-9941": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91dce77935ba"
	                    ],
	                    "NetworkID": "3b40e12707af96d7a87ef0baaec85159df278a3dc4bf817ecae3932e0bcfbdd2",
	                    "EndpointID": "c1650ce3840b80594246acc2f9fcfa432a39e6b48bada03c110930f25ecac707",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
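Side note: the sshutil.go line in the stderr above (new ssh client: IP:127.0.0.1 Port:32960) comes from exactly this inspect output, read with the Go template that cli_runner logged. A hedged sketch of the same lookup, assuming only a local docker CLI and the profile name from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// sshPort runs the same inspect query as cli_runner in the stderr above and
// returns the host port bound to the container's 22/tcp.
func sshPort(container string) (string, error) {
	format := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
	out, err := exec.Command("docker", "container", "inspect", "-f", format, container).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	port, err := sshPort("old-k8s-version-20210609012901-9941")
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh endpoint:", "127.0.0.1:"+port) // 32960 while this container is up
}

While the container is running, this prints 127.0.0.1:32960, matching the NetworkSettings.Ports block above.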
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
helpers_test.go:240: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25: (1.486683584s)
helpers_test.go:248: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
	| stop    | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:37 UTC | Wed, 09 Jun 2021 01:37:48 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=3                                     |                                                |         |                |                               |                               |
	| addons  | enable dashboard -p                                        | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:37:48 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                               |                               |
	| start   | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:31:33 UTC | Wed, 09 Jun 2021 01:37:54 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |                |                               |                               |
	|         | --driver=docker                                            |                                                |         |                |                               |                               |
	|         | --container-runtime=docker                                 |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.20.7                               |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:05 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	| pause   | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:06 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| start   | -p newest-cni-20210609013655-9941 --memory=2200            | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                               |                               |
	|         | --driver=docker  --container-runtime=docker                |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.22.0-alpha.2                       |                                                |         |                |                               |                               |
	| unpause | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	| pause   | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| unpause | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:09 UTC | Wed, 09 Jun 2021 01:38:10 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| delete  | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:11 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	| delete  | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:12 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	| delete  | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	| delete  | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	| start   | -p false-20210609012810-9941                               | false-20210609012810-9941                      | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
	|         | --memory=2048                                              |                                                |         |                |                               |                               |
	|         | --alsologtostderr                                          |                                                |         |                |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |                |                               |                               |
	|         | --cni=false --driver=docker                                |                                                |         |                |                               |                               |
	|         | --container-runtime=docker                                 |                                                |         |                |                               |                               |
	| ssh     | -p false-20210609012810-9941                               | false-20210609012810-9941                      | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:39:52 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |                |                               |                               |
	| delete  | -p false-20210609012810-9941                               | false-20210609012810-9941                      | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:07 UTC | Wed, 09 Jun 2021 01:40:10 UTC |
	| start   | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:32:07 UTC | Wed, 09 Jun 2021 01:40:19 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |                |                               |                               |
	|         | --driver=docker  --container-runtime=docker                |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.20.7                               |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:29 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:30 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:31 UTC | Wed, 09 Jun 2021 01:40:32 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:32 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:36 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210609012901-9941            | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:34:38 UTC | Wed, 09 Jun 2021 01:41:48 UTC |
	|         | old-k8s-version-20210609012901-9941                        |                                                |         |                |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |         |                |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |                |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |         |                |                               |                               |
	|         | --keep-context=false                                       |                                                |         |                |                               |                               |
	|         | --driver=docker                                            |                                                |         |                |                               |                               |
	|         | --container-runtime=docker                                 |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210609012901-9941            | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:41:58 UTC | Wed, 09 Jun 2021 01:41:59 UTC |
	|         | old-k8s-version-20210609012901-9941                        |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/06/09 01:40:36
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0609 01:40:36.631110  352096 out.go:291] Setting OutFile to fd 1 ...
	I0609 01:40:36.631229  352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:40:36.631240  352096 out.go:304] Setting ErrFile to fd 2...
	I0609 01:40:36.631245  352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:40:36.631477  352096 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	I0609 01:40:36.632033  352096 out.go:298] Setting JSON to false
	I0609 01:40:36.673982  352096 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":5000,"bootTime":1623197837,"procs":265,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0609 01:40:36.674111  352096 start.go:121] virtualization: kvm guest
	I0609 01:40:36.676163  352096 out.go:170] * [calico-20210609012810-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
	I0609 01:40:36.678185  352096 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	I0609 01:40:36.679873  352096 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0609 01:40:36.681411  352096 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
	I0609 01:40:36.683678  352096 out.go:170]   - MINIKUBE_LOCATION=11610
	I0609 01:40:36.685630  352096 driver.go:335] Setting default libvirt URI to qemu:///system
	I0609 01:40:36.743399  352096 docker.go:132] docker version: linux-19.03.15
	I0609 01:40:36.743512  352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 01:40:36.834766  352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.791625716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 01:40:36.834840  352096 docker.go:244] overlay module found
	I0609 01:40:36.837087  352096 out.go:170] * Using the docker driver based on user configuration
	I0609 01:40:36.837110  352096 start.go:279] selected driver: docker
	I0609 01:40:36.837115  352096 start.go:752] validating driver "docker" against <nil>
	I0609 01:40:36.837133  352096 start.go:763] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0609 01:40:36.837178  352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0609 01:40:36.837196  352096 out.go:235] ! Your cgroup does not allow setting memory.
	I0609 01:40:36.838992  352096 out.go:170]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0609 01:40:36.839863  352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 01:40:36.932062  352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.890557056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 01:40:36.932180  352096 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0609 01:40:36.932334  352096 start_flags.go:656] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0609 01:40:36.932354  352096 cni.go:93] Creating CNI manager for "calico"
	I0609 01:40:36.932360  352096 start_flags.go:268] Found "Calico" CNI - setting NetworkPlugin=cni
	I0609 01:40:36.932385  352096 start_flags.go:273] config:
	{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 01:40:36.934649  352096 out.go:170] * Starting control plane node calico-20210609012810-9941 in cluster calico-20210609012810-9941
	I0609 01:40:36.934693  352096 cache.go:115] Beginning downloading kic base image for docker with docker
	I0609 01:40:36.936147  352096 out.go:170] * Pulling base image ...
	I0609 01:40:36.936172  352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:40:36.936194  352096 preload.go:125] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
	I0609 01:40:36.936205  352096 cache.go:54] Caching tarball of preloaded images
	I0609 01:40:36.936277  352096 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
	I0609 01:40:36.936357  352096 preload.go:166] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0609 01:40:36.936376  352096 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.7 on docker
	I0609 01:40:36.936388  352096 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
	I0609 01:40:36.936410  352096 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
	I0609 01:40:36.936420  352096 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
	I0609 01:40:36.936434  352096 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
	I0609 01:40:36.936440  352096 image.go:74] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon
	I0609 01:40:36.936479  352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
	I0609 01:40:36.936497  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json: {Name:mk031fde7609ae3e97daec785ed839e7488473bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:37.048612  352096 image.go:78] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon, skipping pull
	I0609 01:40:37.048657  352096 cache.go:146] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in daemon, skipping load
	I0609 01:40:37.048675  352096 cache.go:202] Successfully downloaded all kic artifacts
	I0609 01:40:37.048728  352096 start.go:313] acquiring machines lock for calico-20210609012810-9941: {Name:mkae53a330b20aaf52e1813b8aee573fcaaec970 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 01:40:37.048858  352096 start.go:317] acquired machines lock for "calico-20210609012810-9941" in 106.275µs
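
	The machines lock above is acquired with a 500ms retry delay and a 10m timeout. A minimal sketch of that acquire-with-timeout shape, using an O_EXCL lock file for illustration only; this is not minikube's actual lock implementation:

    // lock_timeout.go — poll for an exclusive lock file with a retry delay and
    // an overall timeout, matching the {Delay:500ms Timeout:10m0s} shape above.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // acquire returns a release func once the lock file could be created
    // exclusively, or an error when the deadline passes first.
    func acquire(path string, delay, timeout time.Duration) (func(), error) {
        deadline := time.Now().Add(timeout)
        for {
            f, err := os.OpenFile(path, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
            if err == nil {
                f.Close()
                return func() { os.Remove(path) }, nil
            }
            if time.Now().After(deadline) {
                return nil, fmt.Errorf("timed out acquiring %s", path)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("/tmp/minikube-machines.lock",
            500*time.Millisecond, 10*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("lock held; provisioning machine...")
    }
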
	I0609 01:40:37.048894  352096 start.go:89] Provisioning new machine with config: &{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
	I0609 01:40:37.049004  352096 start.go:126] createHost starting for "" (driver="docker")
	I0609 01:40:34.017726  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:37.085772  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:35.678351  300573 out.go:170] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0609 01:40:35.678380  300573 addons.go:344] enableAddons completed in 2.095265934s
	I0609 01:40:35.865805  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:38.366329  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:35.493169  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:35.992256  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:36.492949  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:36.992808  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:37.492406  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:37.992460  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:38.492814  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:38.993013  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:39.492346  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:39.992376  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:37.051194  352096 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0609 01:40:37.051469  352096 start.go:160] libmachine.API.Create for "calico-20210609012810-9941" (driver="docker")
	I0609 01:40:37.051513  352096 client.go:168] LocalClient.Create starting
	I0609 01:40:37.051649  352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
	I0609 01:40:37.051689  352096 main.go:128] libmachine: Decoding PEM data...
	I0609 01:40:37.051712  352096 main.go:128] libmachine: Parsing certificate...
	I0609 01:40:37.051880  352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
	I0609 01:40:37.051910  352096 main.go:128] libmachine: Decoding PEM data...
	I0609 01:40:37.051926  352096 main.go:128] libmachine: Parsing certificate...
	I0609 01:40:37.052424  352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0609 01:40:37.099637  352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0609 01:40:37.099719  352096 network_create.go:255] running [docker network inspect calico-20210609012810-9941] to gather additional debugging logs...
	I0609 01:40:37.099742  352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941
	W0609 01:40:37.138707  352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 returned with exit code 1
	I0609 01:40:37.138742  352096 network_create.go:258] error running [docker network inspect calico-20210609012810-9941]: docker network inspect calico-20210609012810-9941: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210609012810-9941
	I0609 01:40:37.138765  352096 network_create.go:260] output of [docker network inspect calico-20210609012810-9941]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210609012810-9941
	
	** /stderr **
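
	The exit-status-1 path above is the expected probe for a network that does not exist yet. A sketch of the same probe done by shelling out to `docker network inspect` and decoding its JSON array output; the struct fields mirror Docker's real inspect output, everything else is illustrative:

    // network_inspect.go — probe a docker network and decode its JSON.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type ipamConfig struct {
        Subnet  string
        Gateway string
    }

    type network struct {
        Name   string
        Driver string
        IPAM   struct{ Config []ipamConfig }
    }

    func inspect(name string) (*network, error) {
        out, err := exec.Command("docker", "network", "inspect", name).Output()
        if err != nil {
            // non-zero exit, e.g. "Error: No such network: <name>" as above
            return nil, fmt.Errorf("docker network inspect %s: %w", name, err)
        }
        var nets []network // inspect prints a JSON array
        if err := json.Unmarshal(out, &nets); err != nil {
            return nil, err
        }
        if len(nets) == 0 {
            return nil, fmt.Errorf("network %s not found", name)
        }
        return &nets[0], nil
    }

    func main() {
        if n, err := inspect("bridge"); err == nil {
            fmt.Println(n.Name, n.Driver, n.IPAM.Config)
        }
    }
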
	I0609 01:40:37.138809  352096 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:40:37.177770  352096 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
	I0609 01:40:37.178451  352096 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00072a3b8] misses:0}
	I0609 01:40:37.178494  352096 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0609 01:40:37.178511  352096 network_create.go:106] attempt to create docker network calico-20210609012810-9941 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0609 01:40:37.178562  352096 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210609012810-9941
	I0609 01:40:37.256968  352096 network_create.go:90] docker network calico-20210609012810-9941 192.168.58.0/24 created
	I0609 01:40:37.257004  352096 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20210609012810-9941" container
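
	Subnet selection above skips 192.168.49.0/24 (already held by another cluster's bridge) and settles on 192.168.58.0/24, with .1 as the gateway and .2 as the node's static IP. A sketch of that scan against the host's interfaces; the candidate range and the step of 9 are inferred from the two subnets in this log, not confirmed against minikube's source:

    // free_subnet.go — walk candidate /24s and take the first one no host
    // interface already occupies.
    package main

    import (
        "fmt"
        "net"
    )

    // taken reports whether any host interface address falls inside cidr.
    func taken(cidr *net.IPNet) bool {
        addrs, err := net.InterfaceAddrs()
        if err != nil {
            return true // be conservative on error
        }
        for _, a := range addrs {
            if ipn, ok := a.(*net.IPNet); ok && cidr.Contains(ipn.IP) {
                return true
            }
        }
        return false
    }

    func main() {
        for third := 49; third < 256; third += 9 { // 49, 58, 67, ...
            _, cidr, _ := net.ParseCIDR(fmt.Sprintf("192.168.%d.0/24", third))
            if taken(cidr) {
                fmt.Println("skipping taken subnet", cidr)
                continue
            }
            gw := fmt.Sprintf("192.168.%d.1", third)   // gateway = .1
            node := fmt.Sprintf("192.168.%d.2", third) // first client = .2
            fmt.Println("using", cidr, "gateway", gw, "node", node)
            return
        }
    }
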
	I0609 01:40:37.257070  352096 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0609 01:40:37.300737  352096 cli_runner.go:115] Run: docker volume create calico-20210609012810-9941 --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true
	I0609 01:40:37.340542  352096 oci.go:102] Successfully created a docker volume calico-20210609012810-9941
	I0609 01:40:37.340623  352096 cli_runner.go:115] Run: docker run --rm --name calico-20210609012810-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --entrypoint /usr/bin/test -v calico-20210609012810-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
	I0609 01:40:38.148995  352096 oci.go:106] Successfully prepared a docker volume calico-20210609012810-9941
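
	The `--entrypoint /usr/bin/test ... -d /var/lib` sidecar above relies on Docker populating a freshly created named volume with the image's contents at the mount point, so a no-op command is enough to prime the volume. A sketch of the same trick via os/exec (the image tag is shortened; the digest from the log is omitted):

    // volume_prime.go — mount a fresh named volume at /var and run a no-op
    // entrypoint; Docker copies the image's /var into the volume, and
    // `test -d /var/lib` exits 0 once that content is in place.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func primeVolume(volume, image string) error {
        out, err := exec.Command("docker", "run", "--rm",
            "--entrypoint", "/usr/bin/test",
            "-v", volume+":/var",
            image, "-d", "/var/lib").CombinedOutput()
        if err != nil {
            return fmt.Errorf("prime %s: %v: %s", volume, err, out)
        }
        return nil
    }

    func main() {
        fmt.Println(primeVolume("calico-20210609012810-9941",
            "gcr.io/k8s-minikube/kicbase:v0.0.23"))
    }
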
	W0609 01:40:38.149052  352096 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0609 01:40:38.149065  352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0609 01:40:38.149126  352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:40:38.149132  352096 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0609 01:40:38.149158  352096 kic.go:179] Starting extracting preloaded images to volume ...
	I0609 01:40:38.149224  352096 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
	I0609 01:40:38.241538  352096 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210609012810-9941 --name calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210609012810-9941 --network calico-20210609012810-9941 --ip 192.168.58.2 --volume calico-20210609012810-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
	I0609 01:40:38.853918  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Running}}
	I0609 01:40:38.906203  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:38.959124  352096 cli_runner.go:115] Run: docker exec calico-20210609012810-9941 stat /var/lib/dpkg/alternatives/iptables
	I0609 01:40:39.108798  352096 oci.go:278] the created container "calico-20210609012810-9941" has a running status.
	I0609 01:40:39.108836  352096 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa...
	I0609 01:40:39.198235  352096 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0609 01:40:39.602006  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:39.652085  352096 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0609 01:40:39.652109  352096 kic_runner.go:115] Args: [docker exec --privileged calico-20210609012810-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
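
	Key creation for the kic node amounts to generating an RSA keypair, writing the private half under .minikube/machines/<profile>/id_rsa, and copying the public half into the container's authorized_keys (then chown'ing it, as above). A minimal sketch of the generation and encoding steps, with local output paths as placeholders:

    // kic_sshkey.go — generate an RSA keypair and emit the authorized_keys line.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        // Private key in PEM, as written to .../machines/<profile>/id_rsa.
        privPEM := pem.EncodeToMemory(&pem.Block{
            Type:  "RSA PRIVATE KEY",
            Bytes: x509.MarshalPKCS1PrivateKey(key),
        })
        if err := os.WriteFile("id_rsa", privPEM, 0o600); err != nil {
            panic(err)
        }
        // Public key in authorized_keys format, copied into the container.
        pub, err := ssh.NewPublicKey(&key.PublicKey)
        if err != nil {
            panic(err)
        }
        fmt.Print(string(ssh.MarshalAuthorizedKey(pub)))
    }
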
	I0609 01:40:40.132328  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:40.865096  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:42.865643  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:41.950654  352096 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (3.801357977s)
	I0609 01:40:41.950723  352096 kic.go:188] duration metric: took 3.801562 seconds to extract preloaded images to volume
	I0609 01:40:41.950817  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:41.990470  352096 machine.go:88] provisioning docker machine ...
	I0609 01:40:41.990506  352096 ubuntu.go:169] provisioning hostname "calico-20210609012810-9941"
	I0609 01:40:41.990596  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.031665  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.031889  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.031912  352096 main.go:128] libmachine: About to run SSH command:
	sudo hostname calico-20210609012810-9941 && echo "calico-20210609012810-9941" | sudo tee /etc/hostname
	I0609 01:40:42.168989  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: calico-20210609012810-9941
	
	I0609 01:40:42.169058  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.214838  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.214999  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.215023  352096 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210609012810-9941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210609012810-9941/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210609012810-9941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 01:40:42.332932  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: 
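
	The hostname and /etc/hosts commands above run over SSH to the port Docker published on 127.0.0.1 (32985 here). A sketch of that round trip with golang.org/x/crypto/ssh; the host-key check is skipped, which is reasonable only for a local kic node, and the key path and hostname are placeholders:

    // ssh_exec.go — dial the forwarded SSH port and run a provisioning command.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:32985", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer sess.Close()
        out, err := sess.CombinedOutput(
            `sudo hostname my-node && echo "my-node" | sudo tee /etc/hostname`)
        fmt.Printf("output: %s err: %v\n", out, err)
    }
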
	I0609 01:40:42.332992  352096 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
	I0609 01:40:42.333032  352096 ubuntu.go:177] setting up certificates
	I0609 01:40:42.333040  352096 provision.go:83] configureAuth start
	I0609 01:40:42.333091  352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
	I0609 01:40:42.372958  352096 provision.go:137] copyHostCerts
	I0609 01:40:42.373013  352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
	I0609 01:40:42.373030  352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
	I0609 01:40:42.373084  352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
	I0609 01:40:42.373174  352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
	I0609 01:40:42.373185  352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
	I0609 01:40:42.373208  352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
	I0609 01:40:42.373272  352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
	I0609 01:40:42.373298  352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
	I0609 01:40:42.373324  352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
	I0609 01:40:42.373372  352096 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.calico-20210609012810-9941 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210609012810-9941]
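
	The server cert above is issued for the SAN list shown (the node IP, loopback, and the hostnames). A sketch of building such a certificate with crypto/x509; it self-signs for brevity, whereas minikube signs with its CA key, so only the SAN handling carries over:

    // server_cert.go — build a server certificate whose SANs cover IPs and
    // hostnames, mirroring the san=[...] list in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            panic(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(1),
            Subject:      pkix.Name{Organization: []string{"jenkins.calico-20210609012810-9941"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
            KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            // SANs as in the log: node IP, loopback, and hostnames.
            IPAddresses: []net.IP{net.ParseIP("192.168.58.2"), net.ParseIP("127.0.0.1")},
            DNSNames:    []string{"localhost", "minikube", "calico-20210609012810-9941"},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            panic(err)
        }
        pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
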
	I0609 01:40:42.470940  352096 provision.go:171] copyRemoteCerts
	I0609 01:40:42.470996  352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 01:40:42.471030  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.516819  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:42.604293  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0609 01:40:42.620326  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0609 01:40:42.635125  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0609 01:40:42.650438  352096 provision.go:86] duration metric: configureAuth took 317.389022ms
	I0609 01:40:42.650459  352096 ubuntu.go:193] setting minikube options for container-runtime
	I0609 01:40:42.650643  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.690608  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.690768  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.690789  352096 main.go:128] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 01:40:42.809400  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 01:40:42.809436  352096 ubuntu.go:71] root file system type: overlay
	I0609 01:40:42.809629  352096 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 01:40:42.809695  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.849952  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.850124  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.850223  352096 main.go:128] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 01:40:42.982970  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 01:40:42.983065  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.031885  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:43.032086  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:43.032118  352096 main.go:128] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 01:40:43.625675  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-06-09 01:40:42.981589018 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0609 01:40:43.625711  352096 machine.go:91] provisioned docker machine in 1.635218617s
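
	The `sudo diff -u ... || { sudo mv ...; daemon-reload; restart; }` pattern above makes the unit-file update idempotent: the service is only restarted when the rendered unit actually differs from the installed one. A sketch of the same compare-then-replace step in Go, with the systemctl follow-up left as a comment:

    // unit_update.go — install a rendered unit file only when it changed.
    package main

    import (
        "bytes"
        "fmt"
        "os"
        "path/filepath"
    )

    // writeIfChanged installs rendered at path only when it differs from what
    // is already there; it returns true when the caller should reload/restart.
    func writeIfChanged(path string, rendered []byte) (bool, error) {
        current, _ := os.ReadFile(path) // a missing file simply differs
        if bytes.Equal(current, rendered) {
            return false, nil // nothing to do; skip daemon-reload and restart
        }
        tmp := path + ".new"
        if err := os.WriteFile(tmp, rendered, 0o644); err != nil {
            return false, err
        }
        return true, os.Rename(tmp, path)
    }

    func main() {
        unit := []byte("[Service]\nExecStart=\nExecStart=/usr/bin/dockerd ...\n")
        path := filepath.Join(os.TempDir(), "docker.service")
        changed, err := writeIfChanged(path, unit)
        fmt.Println("changed:", changed, "err:", err)
        // On a real node, changed==true is what triggers:
        //   sudo systemctl daemon-reload && sudo systemctl restart docker
    }
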
	I0609 01:40:43.625725  352096 client.go:171] LocalClient.Create took 6.574201593s
	I0609 01:40:43.625748  352096 start.go:168] duration metric: libmachine.API.Create for "calico-20210609012810-9941" took 6.574278241s
	I0609 01:40:43.625761  352096 start.go:267] post-start starting for "calico-20210609012810-9941" (driver="docker")
	I0609 01:40:43.625768  352096 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 01:40:43.625839  352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 01:40:43.625883  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.667182  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:43.752939  352096 ssh_runner.go:149] Run: cat /etc/os-release
	I0609 01:40:43.755722  352096 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 01:40:43.755749  352096 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 01:40:43.755763  352096 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 01:40:43.755771  352096 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0609 01:40:43.755788  352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
	I0609 01:40:43.755837  352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
	I0609 01:40:43.755931  352096 start.go:270] post-start completed in 130.162299ms
	I0609 01:40:43.756175  352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
	I0609 01:40:43.794853  352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
	I0609 01:40:43.795091  352096 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:40:43.795138  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.833691  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
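
	The df/awk pipeline above reads the usage percentage for /var. The same figure can be computed natively with golang.org/x/sys/unix instead of shelling out, modulo df's reserved-block accounting; a sketch:

    // disk_usage.go — compute /var usage directly via statfs.
    package main

    import (
        "fmt"

        "golang.org/x/sys/unix"
    )

    func main() {
        var st unix.Statfs_t
        if err := unix.Statfs("/var", &st); err != nil {
            panic(err)
        }
        total := st.Blocks * uint64(st.Bsize)
        avail := st.Bavail * uint64(st.Bsize)
        usedPct := 100 * float64(total-avail) / float64(total)
        fmt.Printf("/var is %.0f%% used\n", usedPct)
    }
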
	I0609 01:40:43.917790  352096 start.go:129] duration metric: createHost completed in 6.868772218s
	I0609 01:40:43.917824  352096 start.go:80] releasing machines lock for "calico-20210609012810-9941", held for 6.868947784s
	I0609 01:40:43.917911  352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
	I0609 01:40:43.958012  352096 ssh_runner.go:149] Run: systemctl --version
	I0609 01:40:43.958067  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.958087  352096 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0609 01:40:43.958148  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.999990  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:44.000156  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:44.105048  352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0609 01:40:44.113782  352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:40:44.122327  352096 cruntime.go:225] skipping containerd shutdown because we are bound to it
	I0609 01:40:44.122397  352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0609 01:40:44.130910  352096 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0609 01:40:44.142773  352096 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0609 01:40:44.201078  352096 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0609 01:40:44.256269  352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:40:44.264833  352096 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0609 01:40:44.317328  352096 ssh_runner.go:149] Run: sudo systemctl start docker
	I0609 01:40:44.325668  352096 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0609 01:40:40.492907  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:40.992189  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:41.493228  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:41.993005  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:42.492386  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:42.992261  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:43.493058  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:43.993022  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:44.492490  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:44.993036  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:44.373093  352096 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
	I0609 01:40:44.373166  352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:40:44.410011  352096 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0609 01:40:44.413077  352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0609 01:40:44.422262  352096 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.crt
	I0609 01:40:44.422356  352096 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
	I0609 01:40:44.422503  352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:40:44.422549  352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:40:44.461776  352096 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:40:44.461803  352096 docker.go:466] Images already preloaded, skipping extraction
	I0609 01:40:44.461856  352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:40:44.498947  352096 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:40:44.498975  352096 cache_images.go:74] Images are preloaded, skipping loading
	I0609 01:40:44.499029  352096 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0609 01:40:44.584207  352096 cni.go:93] Creating CNI manager for "calico"
	I0609 01:40:44.584229  352096 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0609 01:40:44.584247  352096 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210609012810-9941 NodeName:calico-20210609012810-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0609 01:40:44.584403  352096 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20210609012810-9941"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.7
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	
	I0609 01:40:44.584487  352096 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210609012810-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0609 01:40:44.584549  352096 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
	I0609 01:40:44.591407  352096 binaries.go:44] Found k8s binaries, skipping transfer
	I0609 01:40:44.591476  352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0609 01:40:44.597626  352096 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0609 01:40:44.609338  352096 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0609 01:40:44.620431  352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
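
	The kubeadm.yaml and kubelet unit above are rendered in memory and then copied over (the "scp memory -->" lines). A sketch of that kind of rendering with text/template, trimmed to a few fields; the template field names here are illustrative, not minikube's:

    // kubeadm_tmpl.go — render a fragment of a kubeadm config from a template.
    package main

    import (
        "os"
        "text/template"
    )

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta2
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.APIServerPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
      kubeletExtraArgs:
        node-ip: {{.NodeIP}}
    `

    func main() {
        t := template.Must(template.New("kubeadm").Parse(tmpl))
        if err := t.Execute(os.Stdout, map[string]string{
            "AdvertiseAddress": "192.168.58.2",
            "APIServerPort":    "8443",
            "CRISocket":        "/var/run/dockershim.sock",
            "NodeName":         "calico-20210609012810-9941",
            "NodeIP":           "192.168.58.2",
        }); err != nil {
            panic(err)
        }
    }
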
	I0609 01:40:44.631725  352096 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0609 01:40:44.634357  352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0609 01:40:44.642326  352096 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941 for IP: 192.168.58.2
	I0609 01:40:44.642377  352096 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
	I0609 01:40:44.642394  352096 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
	I0609 01:40:44.642461  352096 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
	I0609 01:40:44.642481  352096 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041
	I0609 01:40:44.642488  352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0609 01:40:44.840681  352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 ...
	I0609 01:40:44.840717  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041: {Name:mkfc84e07035095def340a1ef0c06b8c2f56c745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.840897  352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 ...
	I0609 01:40:44.840910  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041: {Name:mk3b1eccc9f0abe0f237561b0ecff13d04e9dd19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.840989  352096 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt
	I0609 01:40:44.841051  352096 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key
	I0609 01:40:44.841102  352096 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key
	I0609 01:40:44.841112  352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt with IP's: []
	I0609 01:40:44.915955  352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt ...
	I0609 01:40:44.915989  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt: {Name:mkf48058b2fd1c7451a636bd94c7654745c05033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.916188  352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key ...
	I0609 01:40:44.916206  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key: {Name:mke09647dda418d05401ddeb31cf7b4c662417a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.916415  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
	W0609 01:40:44.916467  352096 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
	I0609 01:40:44.916486  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
	I0609 01:40:44.916523  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
	I0609 01:40:44.916559  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
	I0609 01:40:44.916590  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
	I0609 01:40:44.917800  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0609 01:40:44.937170  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0609 01:40:44.956373  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0609 01:40:44.974933  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0609 01:40:44.991731  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0609 01:40:45.008489  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0609 01:40:45.031606  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0609 01:40:45.047895  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0609 01:40:45.064667  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
	I0609 01:40:45.080936  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0609 01:40:45.096059  352096 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0609 01:40:45.107015  352096 ssh_runner.go:149] Run: openssl version
	I0609 01:40:45.111407  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0609 01:40:45.119189  352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:40:45.121891  352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun  9 00:58 /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:40:45.121925  352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:40:45.126118  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0609 01:40:45.132551  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
	I0609 01:40:45.138926  352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
	I0609 01:40:45.141619  352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun  9 01:04 /usr/share/ca-certificates/9941.pem
	I0609 01:40:45.141657  352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
	I0609 01:40:45.145814  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
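	
	The six commands above are minikube's CA trust installation: copy each PEM into /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 back at it (b5213941 is the hash printed for minikubeCA.pem, 51391683 for 9941.pem). A minimal Go sketch of the same step, assuming openssl is on PATH; the paths are illustrative, not minikube's actual implementation:
	
	    // installCA mirrors the "openssl x509 -hash -noout" + "ln -fs" pair
	    // from the log: OpenSSL consumers look up trusted CAs by <subject-hash>.0 links.
	    package main
	
	    import (
	        "fmt"
	        "os"
	        "os/exec"
	        "strings"
	    )
	
	    func installCA(pem string) error {
	        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	        if err != nil {
	            return fmt.Errorf("hashing %s: %w", pem, err)
	        }
	        link := fmt.Sprintf("/etc/ssl/certs/%s.0", strings.TrimSpace(string(out)))
	        _ = os.Remove(link) // emulate ln -fs: replace any stale link first
	        return os.Symlink(pem, link)
	    }
	
	    func main() {
	        if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
	            fmt.Fprintln(os.Stderr, err)
	            os.Exit(1)
	        }
	    }
	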
	I0609 01:40:45.152149  352096 kubeadm.go:390] StartCluster: {Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 01:40:45.152257  352096 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0609 01:40:45.187288  352096 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0609 01:40:45.193888  352096 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0609 01:40:45.201487  352096 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0609 01:40:45.201538  352096 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0609 01:40:45.207661  352096 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0609 01:40:45.207713  352096 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
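	
	The exit-status-2 ls probe above is how minikube decides whether stale control-plane configs need cleanup: all four /etc/kubernetes/*.conf files are missing, so this is a fresh node and it proceeds straight to kubeadm init with a long --ignore-preflight-errors list (SystemVerification among them, since the docker driver runs the node inside a container). A hedged sketch of that decision, with the probe shortened to two files for illustration:
	
	    // freshNode reports true when the control-plane configs are absent,
	    // i.e. the existence probe exits non-zero (ls exits 2 on missing files).
	    package main
	
	    import (
	        "errors"
	        "fmt"
	        "os/exec"
	    )
	
	    func freshNode() bool {
	        err := exec.Command("ls", "/etc/kubernetes/admin.conf", "/etc/kubernetes/kubelet.conf").Run()
	        var ee *exec.ExitError
	        return errors.As(err, &ee) // any non-zero exit: configs missing
	    }
	
	    func main() {
	        if freshNode() {
	            fmt.Println("config check failed, skipping stale config cleanup")
	            // next step would be: kubeadm init --config /var/tmp/minikube/kubeadm.yaml ...
	        }
	    }
	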
	I0609 01:40:43.186787  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:46.229769  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:45.365532  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:45.492939  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:45.992622  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:46.493059  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:46.992661  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:48.750771  344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.758074457s)
	I0609 01:40:48.993021  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:49.269941  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:52.311061  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:51.493556  344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.500498227s)
	I0609 01:40:51.992230  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:52.180627  344705 kubeadm.go:985] duration metric: took 19.939502771s to wait for elevateKubeSystemPrivileges.
	I0609 01:40:52.180659  344705 kubeadm.go:392] StartCluster complete in 33.745162361s
	I0609 01:40:52.180680  344705 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:52.180766  344705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	I0609 01:40:52.182512  344705 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:52.757936  344705 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210609012810-9941" rescaled to 1
	I0609 01:40:52.758013  344705 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
	I0609 01:40:52.759853  344705 out.go:170] * Verifying Kubernetes components...
	I0609 01:40:52.758135  344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0609 01:40:52.759935  344705 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:40:52.758167  344705 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0609 01:40:52.760010  344705 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210609012810-9941"
	I0609 01:40:52.758404  344705 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 01:40:52.760030  344705 addons.go:59] Setting default-storageclass=true in profile "cilium-20210609012810-9941"
	I0609 01:40:52.760049  344705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210609012810-9941"
	I0609 01:40:52.760062  344705 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210609012810-9941"
	W0609 01:40:52.760082  344705 addons.go:147] addon storage-provisioner should already be in state true
	I0609 01:40:52.760090  344705 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
	I0609 01:40:52.760113  344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
	I0609 01:40:52.760111  344705 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.718093ms
	I0609 01:40:52.760126  344705 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
	I0609 01:40:52.760140  344705 cache.go:88] Successfully saved all images to host disk.
	I0609 01:40:52.760541  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:52.760709  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:52.761714  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:50.469695  300573 pod_ready.go:92] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"True"
	I0609 01:40:50.469731  300573 pod_ready.go:81] duration metric: took 16.612054385s waiting for pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:50.469746  300573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:51.488708  300573 pod_ready.go:92] pod "kube-proxy-97rr9" in "kube-system" namespace has status "Ready":"True"
	I0609 01:40:51.488734  300573 pod_ready.go:81] duration metric: took 1.018979544s waiting for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:51.488744  300573 pod_ready.go:38] duration metric: took 17.633659357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0609 01:40:51.488765  300573 api_server.go:50] waiting for apiserver process to appear ...
	I0609 01:40:51.488807  300573 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0609 01:40:51.520972  300573 api_server.go:70] duration metric: took 17.937884491s to wait for apiserver process to appear ...
	I0609 01:40:51.520999  300573 api_server.go:86] waiting for apiserver healthz status ...
	I0609 01:40:51.521011  300573 api_server.go:223] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0609 01:40:51.525448  300573 api_server.go:249] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0609 01:40:51.526192  300573 api_server.go:139] control plane version: v1.14.0
	I0609 01:40:51.526211  300573 api_server.go:129] duration metric: took 5.206469ms to wait for apiserver health ...
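	
	The healthz wait above is a plain HTTPS GET against the apiserver; a 200 with body "ok" ends it, and the control-plane version is read afterwards. A minimal sketch using the endpoint from the log, with InsecureSkipVerify standing in for the cluster-CA handling minikube actually does:
	
	    // Probe the apiserver's /healthz endpoint and print status and body.
	    package main
	
	    import (
	        "crypto/tls"
	        "fmt"
	        "io"
	        "net/http"
	    )
	
	    func main() {
	        client := &http.Client{Transport: &http.Transport{
	            TLSClientConfig: &tls.Config{InsecureSkipVerify: true}, // sketch only
	        }}
	        resp, err := client.Get("https://192.168.67.2:8443/healthz")
	        if err != nil {
	            panic(err)
	        }
	        defer resp.Body.Close()
	        body, _ := io.ReadAll(resp.Body)
	        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
	    }
	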
	I0609 01:40:51.526219  300573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0609 01:40:51.528829  300573 system_pods.go:59] 4 kube-system pods found
	I0609 01:40:51.528851  300573 system_pods.go:61] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.528856  300573 system_pods.go:61] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.528865  300573 system_pods.go:61] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:51.528871  300573 system_pods.go:61] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.528887  300573 system_pods.go:74] duration metric: took 2.66306ms to wait for pod list to return data ...
	I0609 01:40:51.528896  300573 default_sa.go:34] waiting for default service account to be created ...
	I0609 01:40:51.531122  300573 default_sa.go:45] found service account: "default"
	I0609 01:40:51.531139  300573 default_sa.go:55] duration metric: took 2.23539ms for default service account to be created ...
	I0609 01:40:51.531146  300573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0609 01:40:51.536460  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:51.536487  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.536494  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.536504  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:51.536517  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.536541  300573 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:51.755301  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:51.755331  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.755339  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.755348  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:51.755355  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.755369  300573 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:52.053824  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:52.053857  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.053865  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.053880  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:52.053892  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.053908  300573 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:52.413227  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:52.413262  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.413272  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.413282  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:52.413289  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.413304  300573 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:52.898013  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:52.898051  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.898059  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.898071  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:52.898078  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.898093  300573 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:53.446671  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:53.446706  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:53.446713  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:53.446722  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:53.446728  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:53.446742  300573 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
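	
	Each retry.go line above is one turn of a poll loop: list the kube-system pods, compare against the required components (etcd, kube-apiserver, kube-controller-manager, kube-scheduler, which on this old-k8s cluster have not yet appeared as pods), and sleep a growing delay before the next attempt. A generic sketch of the pattern, with hypothetical names:
	
	    package main
	
	    import (
	        "fmt"
	        "time"
	    )
	
	    // retryUntil polls check until it succeeds or timeout elapses,
	    // growing the delay between attempts like the log's retry.go does.
	    func retryUntil(timeout time.Duration, check func() (bool, string)) error {
	        delay := 200 * time.Millisecond
	        deadline := time.Now().Add(timeout)
	        for time.Now().Before(deadline) {
	            ok, missing := check()
	            if ok {
	                return nil
	            }
	            fmt.Printf("will retry after %v: missing components: %s\n", delay, missing)
	            time.Sleep(delay)
	            delay += delay / 2 // grow the wait, roughly like the increasing delays above
	        }
	        return fmt.Errorf("timed out after %v", timeout)
	    }
	
	    func main() {
	        _ = retryUntil(5*time.Second, func() (bool, string) {
	            return false, "etcd, kube-apiserver" // stand-in condition
	        })
	    }
	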
	I0609 01:40:52.840705  344705 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0609 01:40:52.840860  344705 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:40:52.840873  344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0609 01:40:52.840938  344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
	I0609 01:40:52.820388  344705 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:40:52.841301  344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
	I0609 01:40:52.823016  344705 addons.go:135] Setting addon default-storageclass=true in "cilium-20210609012810-9941"
	W0609 01:40:52.841379  344705 addons.go:147] addon default-storageclass should already be in state true
	I0609 01:40:52.841434  344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
	I0609 01:40:52.841999  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:52.875619  344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0609 01:40:52.878520  344705 node_ready.go:35] waiting up to 5m0s for node "cilium-20210609012810-9941" to be "Ready" ...
	I0609 01:40:52.883106  344705 node_ready.go:49] node "cilium-20210609012810-9941" has status "Ready":"True"
	I0609 01:40:52.883125  344705 node_ready.go:38] duration metric: took 4.566542ms waiting for node "cilium-20210609012810-9941" to be "Ready" ...
	I0609 01:40:52.883135  344705 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0609 01:40:52.901282  344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:52.905753  344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:52.913698  344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:52.924428  344705 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0609 01:40:52.924451  344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0609 01:40:52.924507  344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
	I0609 01:40:52.985429  344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:53.093158  344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:40:53.182043  344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0609 01:40:53.354533  344705 start.go:725] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
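	
	The "host record injected" line closes the loop on the sed pipeline run at 01:40:52.875619: minikube fetches the coredns ConfigMap, inserts a hosts stanza ahead of the forward plugin, and replaces the ConfigMap, so pods can resolve host.minikube.internal to the host-side gateway. The injected Corefile fragment, exactly as that pipeline writes it:
	
	    hosts {
	       192.168.49.1 host.minikube.internal
	       fallthrough
	    }
	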
	I0609 01:40:53.354610  344705 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:40:53.354626  344705 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
	I0609 01:40:53.354641  344705 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
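	
	LoadImages starts because the diff above between the preloaded tarball's images and the requested list leaves minikube-local-cache-test unaccounted for. The check is plain repo:tag string matching over the daemon's image list; a sketch, with the image name taken from the log:
	
	    // hasImage lists repo:tag pairs in the docker daemon and reports
	    // whether a wanted image is already present (the "preloaded" check).
	    package main
	
	    import (
	        "fmt"
	        "os/exec"
	        "strings"
	    )
	
	    func hasImage(want string) (bool, error) {
	        out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	        if err != nil {
	            return false, err
	        }
	        for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
	            if line == want {
	                return true, nil
	            }
	        }
	        return false, nil
	    }
	
	    func main() {
	        ok, err := hasImage("minikube-local-cache-test:functional-20210609010438-9941")
	        fmt.Println(ok, err)
	    }
	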
	I0609 01:40:53.355651  344705 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:53.355676  344705 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
	I0609 01:40:53.588602  344705 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
	I0609 01:40:53.588639  344705 addons.go:344] enableAddons completed in 830.486904ms
	W0609 01:40:54.204447  344705 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0609 01:40:54.204502  344705 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:54.205330  344705 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
	W0609 01:40:54.817533  344705 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:40:54.940307  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:55.379843  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:54.134198  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:54.134226  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:54.134231  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:54.134238  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:54.134242  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:54.134254  300573 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:55.178626  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:55.178662  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:55.178669  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:55.178679  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:55.178691  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:55.178707  300573 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:56.206796  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:56.206822  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:56.206828  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:56.206835  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:56.206839  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:56.206851  300573 retry.go:31] will retry after 1.268973106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:57.480720  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:57.480751  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:57.480759  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:57.480771  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:57.480778  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:57.480796  300573 retry.go:31] will retry after 1.733071555s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:55.410467  344705 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:40:55.410515  344705 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0609 01:40:55.410544  344705 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:55.410583  344705 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:55.410638  344705 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:55.448411  344705 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.448506  344705 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.451714  344705 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
	I0609 01:40:55.451745  344705 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
	I0609 01:40:55.471575  344705 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.471628  344705 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.762458  344705 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
	I0609 01:40:55.762495  344705 cache_images.go:113] Successfully loaded all cached images
	I0609 01:40:55.762502  344705 cache_images.go:82] LoadImages completed in 2.407848633s
	I0609 01:40:55.762517  344705 cache_images.go:252] succeeded pushing to: cilium-20210609012810-9941
	I0609 01:40:55.762522  344705 cache_images.go:253] failed pushing to: 
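	
	The transfer path just completed is: stat the tarball on the node (absent, hence the status-1 existence check), scp the 5120-byte cached tar across, then docker load it into the node's daemon. The load step, sketched standalone with the on-node path from the log; in minikube the command runs over SSH rather than locally:
	
	    // Stream a cached image tarball into the docker daemon.
	    package main
	
	    import (
	        "os"
	        "os/exec"
	    )
	
	    func main() {
	        tar := "/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941"
	        cmd := exec.Command("docker", "load", "-i", tar)
	        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	        if err := cmd.Run(); err != nil {
	            os.Exit(1)
	        }
	    }
	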
	I0609 01:40:57.446509  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:59.919287  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:00.317663  352096 out.go:197]   - Generating certificates and keys ...
	I0609 01:41:00.320816  352096 out.go:197]   - Booting up control plane ...
	I0609 01:41:00.323612  352096 out.go:197]   - Configuring RBAC rules ...
	I0609 01:41:00.325728  352096 cni.go:93] Creating CNI manager for "calico"
	I0609 01:41:00.327397  352096 out.go:170] * Configuring Calico (Container Networking Interface) ...
	I0609 01:41:00.327463  352096 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
	I0609 01:41:00.327482  352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
	I0609 01:41:00.355615  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0609 01:41:01.345873  352096 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0609 01:41:01.346015  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:01.346096  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=calico-20210609012810-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:58.423166  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:01.474794  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:59.218044  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:59.218071  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:59.218077  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:59.218084  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:59.218089  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:59.218101  300573 retry.go:31] will retry after 2.410580953s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:01.632429  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:01.632456  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:01.632462  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:01.632469  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:01.632476  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:01.632489  300573 retry.go:31] will retry after 3.437877504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:02.460409  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:04.920306  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:01.767984  352096 ops.go:34] apiserver oom_adj: -16
	I0609 01:41:01.768084  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:02.480180  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:02.980220  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:03.480904  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:03.980208  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:04.480690  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:04.980710  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:05.480647  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:05.979985  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:06.480212  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:04.521744  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:05.073834  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:05.073863  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:05.073868  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:05.073876  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:05.073881  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:05.073895  300573 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:08.339005  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:08.339042  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:08.339049  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:08.339061  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:08.339067  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:08.339081  300573 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:07.419175  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:09.443670  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:06.980032  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:07.480282  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:07.980274  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:08.480263  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:08.980571  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:09.480813  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:09.980588  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:10.480840  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:10.980186  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:11.480965  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:07.580079  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:10.622741  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:13.117286  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:13.117320  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:13.117328  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:13.117340  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:13.117348  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:13.117364  300573 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:13.726560  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:11.980058  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:13.480528  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:13.980786  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:15.479870  352096 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.499049149s)
	I0609 01:41:15.479969  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:16.480635  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:13.666259  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:16.715529  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:16.980322  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:17.480064  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:17.980779  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:18.071429  352096 kubeadm.go:985] duration metric: took 16.725453565s to wait for elevateKubeSystemPrivileges.
	I0609 01:41:18.071462  352096 kubeadm.go:392] StartCluster complete in 32.919320287s
	I0609 01:41:18.071483  352096 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:18.071570  352096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	I0609 01:41:18.073757  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:18.664569  352096 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210609012810-9941" rescaled to 1
	I0609 01:41:18.664632  352096 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
	I0609 01:41:18.664651  352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0609 01:41:18.664714  352096 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0609 01:41:18.666538  352096 out.go:170] * Verifying Kubernetes components...
	I0609 01:41:18.664779  352096 addons.go:59] Setting storage-provisioner=true in profile "calico-20210609012810-9941"
	I0609 01:41:18.666596  352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:41:18.666612  352096 addons.go:135] Setting addon storage-provisioner=true in "calico-20210609012810-9941"
	W0609 01:41:18.666630  352096 addons.go:147] addon storage-provisioner should already be in state true
	I0609 01:41:18.664791  352096 addons.go:59] Setting default-storageclass=true in profile "calico-20210609012810-9941"
	I0609 01:41:18.666671  352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
	I0609 01:41:18.666676  352096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210609012810-9941"
	I0609 01:41:18.664965  352096 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 01:41:18.666833  352096 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
	I0609 01:41:18.666855  352096 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.89821ms
	I0609 01:41:18.666869  352096 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
	I0609 01:41:18.666879  352096 cache.go:88] Successfully saved all images to host disk.
	I0609 01:41:18.667046  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.667251  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.667265  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.711328  352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:41:18.711376  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:41:16.464152  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:18.919739  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:18.722674  352096 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0609 01:41:18.722788  352096 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:41:18.722802  352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0609 01:41:18.722851  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:41:18.758518  352096 addons.go:135] Setting addon default-storageclass=true in "calico-20210609012810-9941"
	W0609 01:41:18.758544  352096 addons.go:147] addon default-storageclass should already be in state true
	I0609 01:41:18.758573  352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
	I0609 01:41:18.759066  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.770750  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:41:18.794220  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:41:18.806700  352096 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0609 01:41:18.806724  352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0609 01:41:18.806770  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:41:18.861723  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:41:19.254824  352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0609 01:41:19.257472  352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:41:19.269050  352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0609 01:41:19.269206  352096 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:41:19.269224  352096 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
	I0609 01:41:19.269233  352096 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
	I0609 01:41:19.270563  352096 node_ready.go:35] waiting up to 5m0s for node "calico-20210609012810-9941" to be "Ready" ...
	I0609 01:41:19.270617  352096 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:19.270639  352096 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
	I0609 01:41:19.344594  352096 node_ready.go:49] node "calico-20210609012810-9941" has status "Ready":"True"
	I0609 01:41:19.344625  352096 node_ready.go:38] duration metric: took 74.017948ms waiting for node "calico-20210609012810-9941" to be "Ready" ...
	I0609 01:41:19.344637  352096 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0609 01:41:19.359631  352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
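[Editor's note] pod_ready.go is minikube's internal wait loop; the same readiness gate can be expressed with stock kubectl (kubectl wait has existed since v1.11), shown here for the exact pod and timeout from the log:

	kubectl -n kube-system wait --timeout=5m \
	  --for=condition=Ready pod/calico-kube-controllers-55ffdb7658-gltlk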
	W0609 01:41:20.095801  352096 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0609 01:41:20.095863  352096 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:20.096813  352096 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
	I0609 01:41:20.438848  352096 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18134229s)
	I0609 01:41:20.438935  352096 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.169850353s)
	I0609 01:41:20.438963  352096 start.go:725] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0609 01:41:20.441405  352096 out.go:170] * Enabled addons: default-storageclass, storage-provisioner
	I0609 01:41:20.441438  352096 addons.go:344] enableAddons completed in 1.776732349s
	W0609 01:41:20.710811  352096 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:41:21.301766  352096 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:41:21.301819  352096 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0609 01:41:21.301851  352096 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:21.301896  352096 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:21.301940  352096 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:21.448602  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:21.464097  352096 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:21.464209  352096 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:21.467662  352096 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
	I0609 01:41:21.467695  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
	I0609 01:41:21.553071  352096 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:21.553158  352096 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:19.755463  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:19.524872  300573 system_pods.go:86] 7 kube-system pods found
	I0609 01:41:19.524911  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524921  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:19.524931  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:19.524938  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524948  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524961  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:19.524978  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524996  300573 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager
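[Editor's note] The retry above fires because the control-plane static pods (etcd, kube-apiserver, kube-controller-manager) are still re-registering as Pending after the restart. The same components can be spot-checked from outside the test with the component labels kubeadm applies; the selector below is illustrative, not taken from the log:

	kubectl -n kube-system get pods -o wide \
	  -l 'component in (etcd,kube-apiserver,kube-controller-manager,kube-scheduler)'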
	I0609 01:41:21.419636  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:23.919505  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:21.913966  352096 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
	I0609 01:41:21.914009  352096 cache_images.go:113] Successfully loaded all cached images
	I0609 01:41:21.914025  352096 cache_images.go:82] LoadImages completed in 2.644783095s
	I0609 01:41:21.914043  352096 cache_images.go:252] succeeded pushing to: calico-20210609012810-9941
	I0609 01:41:21.914049  352096 cache_images.go:253] failed pushing to: 
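[Editor's note] Lines 01:41:19 through 01:41:21 trace the image-cache fallback: the Docker Hub lookup 401s (the test image only ever existed locally), the daemon lookup finds no reference, so minikube scp's the cached tarball into the node and docker-loads it there. A hand-run equivalent of that transfer, using the key path and SSH port (32985) from the sshutil lines above; paths are shortened to their .minikube-relative form, and staging through /tmp sidesteps the node-side permissions the test harness handles itself:

	CACHE=.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
	KEY=.minikube/machines/calico-20210609012810-9941/id_rsa
	scp -P 32985 -i "$KEY" "$CACHE" docker@127.0.0.1:/tmp/cached-image.tar
	ssh -p 32985 -i "$KEY" docker@127.0.0.1 'docker load -i /tmp/cached-image.tar'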
	I0609 01:41:23.875804  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:25.876212  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:22.798808  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:25.839455  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:25.592272  300573 system_pods.go:86] 7 kube-system pods found
	I0609 01:41:25.592298  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592304  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:25.592308  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:25.592311  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592317  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592325  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:25.592331  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592342  300573 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0609 01:41:25.919767  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:28.419788  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:28.920252  344705 pod_ready.go:92] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:28.920277  344705 pod_ready.go:81] duration metric: took 36.018972007s waiting for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.920288  344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.924675  344705 pod_ready.go:92] pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:28.924691  344705 pod_ready.go:81] duration metric: took 4.397091ms waiting for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.924702  344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.929071  344705 pod_ready.go:92] pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:28.929091  344705 pod_ready.go:81] duration metric: took 4.382306ms waiting for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.929102  344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.931060  344705 pod_ready.go:97] error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
	I0609 01:41:28.931084  344705 pod_ready.go:81] duration metric: took 1.975143ms waiting for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
	E0609 01:41:28.931095  344705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
	I0609 01:41:28.931103  344705 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:27.876306  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:30.376138  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:28.884648  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:31.933672  329232 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0609 01:41:31.933729  329232 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0609 01:41:31.934195  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	W0609 01:41:31.985166  329232 delete.go:135] deletehost failed: Docker machine "auto-20210609012809-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0609 01:41:31.985255  329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
	I0609 01:41:32.031852  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:32.081551  329232 cli_runner.go:115] Run: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0"
	W0609 01:41:32.125884  329232 cli_runner.go:162] docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0" returned with exit code 1
	I0609 01:41:32.125930  329232 oci.go:632] error shutdown auto-20210609012809-9941: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container bc54bc9bf415ee2bb0df1bcad0aed4e971bd39991c0782ffae750733117660bd is not running
	I0609 01:41:33.127009  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:33.188615  329232 oci.go:646] temporary error: container auto-20210609012809-9941 status is  but expect it to be exited
	I0609 01:41:33.188641  329232 oci.go:652] Successfully shutdown container auto-20210609012809-9941
	I0609 01:41:33.188680  329232 cli_runner.go:115] Run: docker rm -f -v auto-20210609012809-9941
	I0609 01:41:33.232875  329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
	W0609 01:41:33.278916  329232 cli_runner.go:162] docker container inspect -f {{.Id}} auto-20210609012809-9941 returned with exit code 1
	I0609 01:41:33.279004  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0609 01:41:33.317124  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0609 01:41:33.317184  329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
	I0609 01:41:33.317205  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
	W0609 01:41:33.354864  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
	I0609 01:41:33.354894  329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210609012809-9941
	I0609 01:41:33.354910  329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210609012809-9941
	
	** /stderr **
	W0609 01:41:33.355033  329232 delete.go:139] delete failed (probably ok) <nil>
	I0609 01:41:33.355043  329232 fix.go:120] Sleeping 1 second for extra luck!
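[Editor's note] Everything from the stop timeout down to here is a tolerated teardown of the half-created auto- machine: "sudo init 0" fails because the container already died, "docker rm -f -v" removes it, and the failing network inspects merely confirm the network is gone before the "probably ok" delete returns. The same cleanup by hand, with minikube's tolerance expressed as "|| true" (names from the log):

	docker rm -f -v auto-20210609012809-9941 2>/dev/null || true
	docker network rm auto-20210609012809-9941 2>/dev/null || true
	# anything left behind by minikube profiles:
	docker network ls --filter label=created_by.minikube.sigs.k8s.io=true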
	I0609 01:41:34.355909  329232 start.go:126] createHost starting for "" (driver="docker")
	I0609 01:41:30.941410  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:32.942019  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:34.942818  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:32.377229  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:34.876436  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:34.358151  329232 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0609 01:41:34.358255  329232 start.go:160] libmachine.API.Create for "auto-20210609012809-9941" (driver="docker")
	I0609 01:41:34.358292  329232 client.go:168] LocalClient.Create starting
	I0609 01:41:34.358357  329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
	I0609 01:41:34.358386  329232 main.go:128] libmachine: Decoding PEM data...
	I0609 01:41:34.358404  329232 main.go:128] libmachine: Parsing certificate...
	I0609 01:41:34.358508  329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
	I0609 01:41:34.358532  329232 main.go:128] libmachine: Decoding PEM data...
	I0609 01:41:34.358541  329232 main.go:128] libmachine: Parsing certificate...
	I0609 01:41:34.358756  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0609 01:41:34.402255  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0609 01:41:34.402349  329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
	I0609 01:41:34.402373  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
	W0609 01:41:34.447755  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
	I0609 01:41:34.447782  329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210609012809-9941
	I0609 01:41:34.447793  329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210609012809-9941
	
	** /stderr **
	I0609 01:41:34.447829  329232 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:41:34.487524  329232 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
	I0609 01:41:34.488287  329232 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-494a1c72530c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:2d:51:70:a3}}
	I0609 01:41:34.489047  329232 network.go:215] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-3b40e12707af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ac:37:f7:3a}}
	I0609 01:41:34.489905  329232 network.go:263] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000136218 192.168.76.0:0xc000408548] misses:0}
	I0609 01:41:34.489944  329232 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0609 01:41:34.489977  329232 network_create.go:106] attempt to create docker network auto-20210609012809-9941 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0609 01:41:34.490049  329232 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210609012809-9941
	I0609 01:41:34.563866  329232 network_create.go:90] docker network auto-20210609012809-9941 192.168.76.0/24 created
	I0609 01:41:34.563896  329232 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20210609012809-9941" container
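[Editor's note] Subnet selection tries candidate /24s in order (192.168.49.0, 58.0, 67.0, 76.0 in this run) until one is not claimed by an existing bridge; the first three belong to earlier profiles, so 192.168.76.0/24 is reserved, .1 becomes the gateway and .2 the node's static IP. The "taken" list it skipped can be reproduced with:

	docker network inspect $(docker network ls -q) \
	  --format '{{.Name}}: {{range .IPAM.Config}}{{.Subnet}}{{end}}'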
	I0609 01:41:34.563950  329232 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0609 01:41:34.605010  329232 cli_runner.go:115] Run: docker volume create auto-20210609012809-9941 --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true
	I0609 01:41:34.642891  329232 oci.go:102] Successfully created a docker volume auto-20210609012809-9941
	I0609 01:41:34.642974  329232 cli_runner.go:115] Run: docker run --rm --name auto-20210609012809-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --entrypoint /usr/bin/test -v auto-20210609012809-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
	I0609 01:41:35.363820  329232 oci.go:106] Successfully prepared a docker volume auto-20210609012809-9941
	W0609 01:41:35.363866  329232 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0609 01:41:35.363875  329232 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0609 01:41:35.363883  329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:41:35.363916  329232 kic.go:179] Starting extracting preloaded images to volume ...
	I0609 01:41:35.363930  329232 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0609 01:41:35.363995  329232 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
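[Editor's note] The pair of docker runs above is the volume-seeding idiom: a throwaway sidecar mounts the freshly created named volume at /var (guaranteeing it exists and is writable), then tar inside the kicbase image unpacks the lz4 preload tarball directly into it. The same pattern trimmed to its essentials; the image and tarball are the ones in the log, the volume name is illustrative:

	docker volume create demo-volume
	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PWD/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro" \
	  -v demo-volume:/extractDir \
	  gcr.io/k8s-minikube/kicbase:v0.0.23 -I lz4 -xf /preloaded.tar -C /extractDir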
	I0609 01:41:35.467993  329232 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210609012809-9941 --name auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210609012809-9941 --network auto-20210609012809-9941 --ip 192.168.76.2 --volume auto-20210609012809-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
	I0609 01:41:35.995981  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Running}}
	I0609 01:41:36.052103  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:36.105861  329232 cli_runner.go:115] Run: docker exec auto-20210609012809-9941 stat /var/lib/dpkg/alternatives/iptables
	I0609 01:41:36.272972  329232 oci.go:278] the created container "auto-20210609012809-9941" has a running status.
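[Editor's note] Note the --publish=127.0.0.1::22 style flags in the run command: each container port is bound to a random ephemeral port on loopback. The mapping the later SSH steps rely on (32990 in this run) can be read back at any time:

	docker port auto-20210609012809-9941 22
	# 127.0.0.1:32990 in this run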
	I0609 01:41:36.273013  329232 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa...
	I0609 01:41:36.425757  329232 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0609 01:41:36.825610  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:36.868189  329232 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0609 01:41:36.868214  329232 kic_runner.go:115] Args: [docker exec --privileged auto-20210609012809-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
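[Editor's note] Key provisioning is ordinary authorized_keys plumbing: generate an RSA pair under the profile's .minikube/machines directory, copy the public half into the container, and chown it to the in-container docker user (the exec at 01:41:36.868214). By hand, against the same container; the /tmp key path is illustrative:

	ssh-keygen -t rsa -N '' -f /tmp/id_rsa
	docker cp /tmp/id_rsa.pub auto-20210609012809-9941:/home/docker/.ssh/authorized_keys
	docker exec --privileged auto-20210609012809-9941 \
	  chown docker:docker /home/docker/.ssh/authorized_keys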
	I0609 01:41:36.102263  300573 system_pods.go:86] 8 kube-system pods found
	I0609 01:41:36.102300  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102308  300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:36.102315  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102323  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102329  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102336  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102347  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:36.102364  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102381  300573 retry.go:31] will retry after 12.194240946s: missing components: etcd
	I0609 01:41:37.093269  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.442809  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.940516  344705 pod_ready.go:92] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:39.940545  344705 pod_ready.go:81] duration metric: took 11.009433469s waiting for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:39.940560  344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:39.944617  344705 pod_ready.go:92] pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:39.944633  344705 pod_ready.go:81] duration metric: took 4.066455ms waiting for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:39.944642  344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:37.080706  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.379466  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:41.383974  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.584397  329232 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (4.220346647s)
	I0609 01:41:39.584427  329232 kic.go:188] duration metric: took 4.220510 seconds to extract preloaded images to volume
	I0609 01:41:39.584497  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:39.635769  329232 machine.go:88] provisioning docker machine ...
	I0609 01:41:39.635827  329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
	I0609 01:41:39.635904  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:39.684460  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:39.684645  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:39.684660  329232 main.go:128] libmachine: About to run SSH command:
	sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
	I0609 01:41:39.841506  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
	
	I0609 01:41:39.841577  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:39.885725  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:39.885870  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:39.885889  329232 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 01:41:40.009081  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0609 01:41:40.009113  329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
	I0609 01:41:40.009136  329232 ubuntu.go:177] setting up certificates
	I0609 01:41:40.009147  329232 provision.go:83] configureAuth start
	I0609 01:41:40.009201  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:40.054568  329232 provision.go:137] copyHostCerts
	I0609 01:41:40.054639  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
	I0609 01:41:40.054650  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
	I0609 01:41:40.054702  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
	I0609 01:41:40.054772  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
	I0609 01:41:40.054816  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
	I0609 01:41:40.054836  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
	I0609 01:41:40.054888  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
	I0609 01:41:40.054896  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
	I0609 01:41:40.054916  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
	I0609 01:41:40.054956  329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
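[Editor's note] The server cert is minted from the profile CA with the SAN set shown (node IP 192.168.76.2, loopback, hostname aliases), which is what lets the client verify dockerd on any of those addresses. Whether a given server.pem really carries them is a one-liner with openssl 1.1.1+ (path shortened to its .minikube-relative form):

	openssl x509 -noout -ext subjectAltName -in .minikube/machines/server.pem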
	I0609 01:41:40.199140  329232 provision.go:171] copyRemoteCerts
	I0609 01:41:40.199207  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 01:41:40.199267  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.240189  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:40.339747  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0609 01:41:40.358551  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0609 01:41:40.377700  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0609 01:41:40.396157  329232 provision.go:86] duration metric: configureAuth took 386.999034ms
	I0609 01:41:40.396180  329232 ubuntu.go:193] setting minikube options for container-runtime
	I0609 01:41:40.396396  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.437678  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:40.437928  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:40.437947  329232 main.go:128] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 01:41:40.565938  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 01:41:40.565966  329232 ubuntu.go:71] root file system type: overlay
	I0609 01:41:40.566224  329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 01:41:40.566318  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.609110  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:40.609254  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:40.609318  329232 main.go:128] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 01:41:40.742784  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 01:41:40.742865  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.799645  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:40.799898  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:40.799934  329232 main.go:128] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 01:41:41.471089  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-06-09 01:41:40.733754700 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0609 01:41:41.471128  329232 machine.go:91] provisioned docker machine in 1.835332676s
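[Editor's note] The diff output above is the visible half of the compare-then-swap idiom in the SSH command before it: the candidate unit is written to docker.service.new, diffed against the live unit, and only when they differ is it moved into place and the daemon reloaded/enabled/restarted, so a no-op provision never bounces dockerd. The idiom in general form (service name illustrative):

	sudo diff -u /lib/systemd/system/foo.service /lib/systemd/system/foo.service.new || {
	  sudo mv /lib/systemd/system/foo.service.new /lib/systemd/system/foo.service
	  sudo systemctl daemon-reload && sudo systemctl -f enable foo && sudo systemctl -f restart foo
	}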
	I0609 01:41:41.471143  329232 client.go:171] LocalClient.Create took 7.112842351s
	I0609 01:41:41.471164  329232 start.go:168] duration metric: libmachine.API.Create for "auto-20210609012809-9941" took 7.112906767s
	I0609 01:41:41.471179  329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
	I0609 01:41:41.471186  329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 01:41:41.471252  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 01:41:41.471302  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:41.519729  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:41.609111  329232 ssh_runner.go:149] Run: cat /etc/os-release
	I0609 01:41:41.611701  329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 01:41:41.611732  329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 01:41:41.611740  329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 01:41:41.611745  329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0609 01:41:41.611753  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
	I0609 01:41:41.611793  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
	I0609 01:41:41.611879  329232 start.go:270] post-start completed in 140.693775ms
	I0609 01:41:41.612136  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:41.660654  329232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/config.json ...
	I0609 01:41:41.660931  329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:41:41.660996  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:41.708265  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:41.793790  329232 start.go:129] duration metric: createHost completed in 7.437849081s
	I0609 01:41:41.793878  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	W0609 01:41:41.834734  329232 fix.go:134] unexpected machine state, will restart: <nil>
	I0609 01:41:41.834764  329232 machine.go:88] provisioning docker machine ...
	I0609 01:41:41.834786  329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
	I0609 01:41:41.834833  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:41.879476  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:41.879641  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:41.879661  329232 main.go:128] libmachine: About to run SSH command:
	sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
	I0609 01:41:42.011151  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
	
	I0609 01:41:42.011225  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.061407  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:42.061641  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:42.061675  329232 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 01:41:42.184948  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0609 01:41:42.184977  329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
	I0609 01:41:42.185001  329232 ubuntu.go:177] setting up certificates
	I0609 01:41:42.185011  329232 provision.go:83] configureAuth start
	I0609 01:41:42.185062  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:42.223424  329232 provision.go:137] copyHostCerts
	I0609 01:41:42.223473  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
	I0609 01:41:42.223480  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
	I0609 01:41:42.223524  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
	I0609 01:41:42.223592  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
	I0609 01:41:42.223605  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
	I0609 01:41:42.223629  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
	I0609 01:41:42.223679  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
	I0609 01:41:42.223689  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
	I0609 01:41:42.223706  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
	I0609 01:41:42.223802  329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
	I0609 01:41:42.486214  329232 provision.go:171] copyRemoteCerts
	I0609 01:41:42.486276  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 01:41:42.486327  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.526157  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:42.612850  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0609 01:41:42.630046  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0609 01:41:42.647341  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0609 01:41:42.663823  329232 provision.go:86] duration metric: configureAuth took 478.797993ms
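
	(With ca.pem, server.pem and server-key.pem now installed under /etc/docker on the node, the chain can be sanity-checked with stock openssl. A sketch, not part of the test run; run it on the node.)

	# Does the server certificate chain to the provisioned CA?
	sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
	# Do certificate and key belong together? The two public-key digests must match.
	sudo openssl x509 -noout -pubkey -in /etc/docker/server.pem | openssl sha256
	sudo openssl pkey -pubout -in /etc/docker/server-key.pem | openssl sha256
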
	I0609 01:41:42.663855  329232 ubuntu.go:193] setting minikube options for container-runtime
	I0609 01:41:42.664049  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.708962  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:42.709147  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:42.709164  329232 main.go:128] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 01:41:42.837104  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 01:41:42.837131  329232 ubuntu.go:71] root file system type: overlay
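
	(The "overlay" answer is read straight from df on the root mount; findmnt reports the same fact in one call. A sketch, not taken from this run.)

	df --output=fstype / | tail -n 1   # e.g. overlay inside a kic container
	findmnt -n -o FSTYPE /             # equivalent probe
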
	I0609 01:41:42.837293  329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 01:41:42.837345  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.884564  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:42.884726  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:42.884819  329232 main.go:128] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 01:41:43.017785  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 01:41:43.017862  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.058769  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:43.058909  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:43.058927  329232 main.go:128] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 01:41:43.180717  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: 
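
	(Two patterns in the exchange above are worth spelling out. First, the unit clears the inherited start command with an empty ExecStart= before setting its own, since systemd rejects a second ExecStart= for anything but Type=oneshot services. Second, Docker is restarted only when the rendered unit actually differs: diff -u exits non-zero on any change or missing file, so the || branch installs the new unit and restarts. As a standalone sketch, with paths copied from the log:)

	new=/lib/systemd/system/docker.service.new
	cur=/lib/systemd/system/docker.service
	# diff exits 0 only when the files are identical; otherwise install and restart.
	if ! sudo diff -u "$cur" "$new"; then
	  sudo mv "$new" "$cur"
	  sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker
	fi
	# Afterwards, confirm the merged unit and the effective start command.
	systemctl cat docker.service
	systemctl show docker.service -p ExecStart
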
	I0609 01:41:43.180750  329232 machine.go:91] provisioned docker machine in 1.345979023s
	I0609 01:41:43.180763  329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
	I0609 01:41:43.180773  329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 01:41:43.180829  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 01:41:43.180871  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.220933  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.308831  329232 ssh_runner.go:149] Run: cat /etc/os-release
	I0609 01:41:43.311629  329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 01:41:43.311653  329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 01:41:43.311664  329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 01:41:43.311671  329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0609 01:41:43.311681  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
	I0609 01:41:43.311732  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
	I0609 01:41:43.311850  329232 start.go:270] post-start completed in 131.0789ms
	I0609 01:41:43.311895  329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:41:43.311938  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.351864  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.439589  329232 fix.go:57] fixHost completed within 3m18.46145985s
	I0609 01:41:43.439614  329232 start.go:80] releasing machines lock for "auto-20210609012809-9941", held for 3m18.461506998s
	I0609 01:41:43.439689  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:43.480908  329232 ssh_runner.go:149] Run: sudo service containerd status
	I0609 01:41:43.480953  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.480998  329232 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0609 01:41:43.481050  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.523337  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.523672  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.625901  329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:41:43.634199  329232 cruntime.go:225] skipping containerd shutdown because we are bound to it
	I0609 01:41:43.634259  329232 ssh_runner.go:149] Run: sudo service crio status
	I0609 01:41:43.651967  329232 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0609 01:41:43.663538  329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:41:43.671774  329232 ssh_runner.go:149] Run: sudo service docker status
	I0609 01:41:43.685805  329232 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0609 01:41:41.955318  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:44.454390  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:43.733795  329232 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
	I0609 01:41:43.733887  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:41:43.781233  329232 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0609 01:41:43.784669  329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0609 01:41:43.794580  329232 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.crt
	I0609 01:41:43.794703  329232 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
	I0609 01:41:43.794837  329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:41:43.794899  329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:41:43.836439  329232 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:41:43.836465  329232 docker.go:466] Images already preloaded, skipping extraction
	I0609 01:41:43.836518  329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:41:43.874900  329232 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:41:43.874929  329232 cache_images.go:74] Images are preloaded, skipping loading
	I0609 01:41:43.874987  329232 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
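
	(This probe feeds the cgroupDriver: cgroupfs line in the kubelet config generated below; kubelet and container runtime must agree on the cgroup driver or the kubelet will not start. The same check by hand:)

	docker info --format '{{.CgroupDriver}}'   # prints cgroupfs or systemd
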
	I0609 01:41:43.959341  329232 cni.go:93] Creating CNI manager for ""
	I0609 01:41:43.959363  329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0609 01:41:43.959373  329232 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0609 01:41:43.959385  329232 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210609012809-9941 NodeName:auto-20210609012809-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0609 01:41:43.959528  329232 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "auto-20210609012809-9941"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.7
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
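
	(A generated config like the one above can be exercised without mutating the node: kubeadm init supports --dry-run, which prints what would be done without changing the host. A sketch, using the path this run installs the config to a few lines below:)

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
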
	
	I0609 01:41:43.959623  329232 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20210609012809-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0609 01:41:43.959678  329232 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
	I0609 01:41:43.966644  329232 binaries.go:44] Found k8s binaries, skipping transfer
	I0609 01:41:43.966767  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0609 01:41:43.973306  329232 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
	I0609 01:41:43.985377  329232 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0609 01:41:43.996832  329232 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1883 bytes)
	I0609 01:41:44.008194  329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0609 01:41:44.019580  329232 ssh_runner.go:316] scp memory --> /etc/init.d/kubelet (839 bytes)
	I0609 01:41:44.031187  329232 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0609 01:41:44.033902  329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
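
	(Note the shape of both /etc/hosts edits in this log: instead of sed -i, the runner rewrites the file through a temp copy and installs it with cp. Inside a container /etc/hosts is usually bind-mounted, so replacing its inode, which sed -i does, fails with "Device or resource busy", while cp rewrites the existing file in place. The pattern on its own, with the entry taken from the log:)

	entry='192.168.76.2	control-plane.minikube.internal'
	{ grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts; echo "$entry"; } > /tmp/h.$$
	sudo cp /tmp/h.$$ /etc/hosts && rm /tmp/h.$$
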
	I0609 01:41:44.042089  329232 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941 for IP: 192.168.76.2
	I0609 01:41:44.042136  329232 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
	I0609 01:41:44.042171  329232 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
	I0609 01:41:44.042229  329232 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
	I0609 01:41:44.042250  329232 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25
	I0609 01:41:44.042257  329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0609 01:41:44.226573  329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 ...
	I0609 01:41:44.226606  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25: {Name:mk90ec242a66bfd79902e518464ceb62421bad6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.226771  329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 ...
	I0609 01:41:44.226783  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25: {Name:mkfae0a3bd896dd88f44a8261ced590d5cf2eaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.226857  329232 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt
	I0609 01:41:44.226912  329232 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key
	I0609 01:41:44.226968  329232 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key
	I0609 01:41:44.226982  329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt with IP's: []
	I0609 01:41:44.493832  329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt ...
	I0609 01:41:44.493863  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt: {Name:mkb1a9418c2d79591044d594bd7bb611a67d607c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.494045  329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key ...
	I0609 01:41:44.494060  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key: {Name:mkadb2ec9513a5b1c87d24f9a0d9353126c956ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.494231  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
	W0609 01:41:44.494272  329232 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
	I0609 01:41:44.494299  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
	I0609 01:41:44.494326  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
	I0609 01:41:44.494386  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
	I0609 01:41:44.494417  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
	I0609 01:41:44.495301  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0609 01:41:44.513759  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0609 01:41:44.556375  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0609 01:41:44.574638  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0609 01:41:44.590891  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0609 01:41:44.607761  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0609 01:41:44.624984  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0609 01:41:44.641979  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0609 01:41:44.661420  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
	I0609 01:41:44.679420  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0609 01:41:44.697286  329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0609 01:41:44.709772  329232 ssh_runner.go:149] Run: openssl version
	I0609 01:41:44.714441  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
	I0609 01:41:44.721420  329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
	I0609 01:41:44.724999  329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun  9 01:04 /usr/share/ca-certificates/9941.pem
	I0609 01:41:44.725051  329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
	I0609 01:41:44.730221  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
	I0609 01:41:44.738018  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0609 01:41:44.744990  329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:41:44.747847  329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun  9 00:58 /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:41:44.747885  329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:41:44.752327  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
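
	(The 51391683.0 and b5213941.0 targets above are OpenSSL subject-hash names: TLS stacks locate a CA in /etc/ssl/certs by the hash of its subject, looking for <hash>.0. Reproducing one link by hand, a sketch where cert.pem is any local CA certificate:)

	hash=$(openssl x509 -hash -noout -in cert.pem)
	sudo ln -fs "$(readlink -f cert.pem)" "/etc/ssl/certs/${hash}.0"
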
	I0609 01:41:44.759007  329232 kubeadm.go:390] StartCluster: {Name:auto-20210609012809-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 01:41:44.759106  329232 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0609 01:41:44.801843  329232 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0609 01:41:44.810329  329232 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0609 01:41:44.818129  329232 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0609 01:41:44.818183  329232 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0609 01:41:44.825259  329232 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0609 01:41:44.825307  329232 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0609 01:41:43.875536  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:46.376745  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:45.588110  329232 out.go:197]   - Generating certificates and keys ...
	I0609 01:41:48.300953  300573 system_pods.go:86] 8 kube-system pods found
	I0609 01:41:48.300985  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.300993  300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301000  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301006  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301013  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301020  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301031  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:48.301043  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301053  300573 system_pods.go:126] duration metric: took 56.76990207s to wait for k8s-apps to be running ...
	I0609 01:41:48.301068  300573 system_svc.go:44] waiting for kubelet service to be running ....
	I0609 01:41:48.301114  300573 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:41:48.310381  300573 system_svc.go:56] duration metric: took 9.307261ms WaitForService to wait for kubelet.
	I0609 01:41:48.310405  300573 kubeadm.go:547] duration metric: took 1m14.727322076s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0609 01:41:48.310424  300573 node_conditions.go:102] verifying NodePressure condition ...
	I0609 01:41:48.312372  300573 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0609 01:41:48.312391  300573 node_conditions.go:123] node cpu capacity is 8
	I0609 01:41:48.312404  300573 node_conditions.go:105] duration metric: took 1.974952ms to run NodePressure ...
	I0609 01:41:48.312415  300573 start.go:219] waiting for startup goroutines ...
	I0609 01:41:48.356569  300573 start.go:463] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0609 01:41:48.358565  300573 out.go:170] 
	W0609 01:41:48.358730  300573 out.go:235] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilites with Kubernetes 1.14.0.
	I0609 01:41:48.360236  300573 out.go:170]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0609 01:41:48.361792  300573 out.go:170] * Done! kubectl is now configured to use "old-k8s-version-20210609012901-9941" cluster and "default" namespace by default
	I0609 01:41:46.954352  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:48.955130  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:47.875252  352096 pod_ready.go:92] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:47.875281  352096 pod_ready.go:81] duration metric: took 28.515609073s waiting for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:47.875297  352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:49.886712  352096 pod_ready.go:92] pod "calico-node-8bhjk" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:49.886740  352096 pod_ready.go:81] duration metric: took 2.011435025s waiting for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:49.886752  352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:47.864552  329232 out.go:197]   - Booting up control plane ...
	I0609 01:41:50.955197  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:53.456163  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:51.896789  352096 pod_ready.go:92] pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:51.896811  352096 pod_ready.go:81] duration metric: took 2.010052283s waiting for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:51.896821  352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:51.898882  352096 pod_ready.go:97] error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
	I0609 01:41:51.898909  352096 pod_ready.go:81] duration metric: took 2.080404ms waiting for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
	E0609 01:41:51.898919  352096 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
	I0609 01:41:51.898928  352096 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:53.907845  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:55.911876  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:55.954929  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:57.955126  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:59.956675  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:58.408965  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:42:00.909845  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:57.536931  329232 out.go:197]   - Configuring RBAC rules ...
	I0609 01:41:57.950447  329232 cni.go:93] Creating CNI manager for ""
	I0609 01:41:57.950472  329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0609 01:41:57.950504  329232 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0609 01:41:57.950565  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:57.950588  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=auto-20210609012809-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_57_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:58.270674  329232 ops.go:34] apiserver oom_adj: -16
	I0609 01:41:58.270873  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:58.834789  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:59.334848  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:59.834836  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:00.334592  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:00.835312  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:01.335240  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:01.834799  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:02.334849  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:03 UTC. --
	Jun 09 01:40:02 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:02.779605017Z" level=info msg="ignoring event" container=cc0aca83efeca0d2b5a6380f0035838137a5ddede617bb12397795175054b95c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.115851734Z" level=info msg="ignoring event" container=5e67ef29fd782e6882093cefc8d1b2e4e6502289a8aab7eb602baa78ff03d4df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.244359054Z" level=info msg="ignoring event" container=647284240c9b3ff26c1e5d787021349e374f04b87d9f0c78f0972878ca393ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.376184625Z" level=info msg="ignoring event" container=8a1abb294bc93b7aeb07164f4e6a549e477648e117418f2e94e2b62b742a603f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.503253921Z" level=info msg="ignoring event" container=a8f1d2a6258c19eb81fe707363ba95a59689f2623e07e372b5f44056f81b71b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.655460364Z" level=info msg="ignoring event" container=0a42e38b95e96fac8c84fbd6415b07279c3f7b4dc175292ee03bf72f93504bff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.868060101Z" level=info msg="ignoring event" container=8f37f3879958d7bcfb1fb37da48178584862829d0f9ab46e57d49320f37fc3f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.043079624Z" level=info msg="ignoring event" container=83d747333959a40a15d16276795b19088263280ab507d0e39ebf3009f9cd7290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.194657529Z" level=info msg="ignoring event" container=76c2df28bafa15f4875a399fd3f8bde03a6e76c0e021ffe56eb96ee35045923f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:36 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:36.611806519Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.093237111Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.256429752Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432301024Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432343163Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.433989922Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.749379613Z" level=info msg="ignoring event" container=209b2f1f12c840e229b4ae712cd7def2451c3e705cd6cf899ed05d4cae0c0929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:43 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:43.034860759Z" level=info msg="ignoring event" container=e15298565a01a44ba2e81fbb337da50279e879415a5091222be3a5e36aee08d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032186534Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032222718Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.041807409Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:01 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:01.346826619Z" level=info msg="ignoring event" container=417a2459ca5d2c0a4e1befd352a48e44dc91fb4015fe574d929d8c1097ce09cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038495294Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038537670Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.040714461Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:34 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:34.345802355Z" level=info msg="ignoring event" container=0a878f155b99161e7c0c238df1d2ea55fb150f549896a43282d60c2825d2e0ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	0a878f155b991       a90209bb39e3d       29 seconds ago       Exited              dashboard-metrics-scraper   3                   7b28bd8313edd
	9230420d066a0       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   52cb0877bbe76
	80656451acc2e       eb516548c180f       About a minute ago   Running             coredns                     0                   b82c08bb91986
	d27ec4783cae5       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   3c840dfa16845
	ef3565ebed501       5cd54e388abaf       About a minute ago   Running             kube-proxy                  0                   facebb8dc382e
	15294a1b99e50       00638a24688b0       About a minute ago   Running             kube-scheduler              0                   9113a9c371341
	76559266dc96c       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   5c8b321c5839a
	557ff658123d4       2c4adeb21b4ff       About a minute ago   Running             etcd                        0                   4d98c28eb4819
	7435c96f89723       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              0                   553d498b0da82
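	Note: this table already isolates the two unhealthy workloads: dashboard-metrics-scraper has Exited after 3 attempts (a crash loop), and metrics-server has no container at all because its image never pulls. Everything else, including the whole control plane, is Running. To read the scraper's last output by container ID (IDs from this run; the pod containers live inside the node's nested dockerd):
	
	  docker exec old-k8s-version-20210609012901-9941 docker logs 0a878f155b991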
	
	* 
	* ==> coredns [80656451acc2] <==
	* .:53
	2021-06-09T01:40:37.071Z [INFO] CoreDNS-1.3.1
	2021-06-09T01:40:37.071Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-06-09T01:40:37.071Z [INFO] plugin/reload: Running configuration MD5 = d7336ec3b7f1205cfa0fef85b62c291b
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210609012901-9941
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210609012901-9941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc
	                    minikube.k8s.io/name=old-k8s-version-20210609012901-9941
	                    minikube.k8s.io/updated_at=2021_06_09T01_40_17_0700
	                    minikube.k8s.io/version=v1.21.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Jun 2021 01:40:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    old-k8s-version-20210609012901-9941
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951376Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951376Ki
	 pods:               110
	System Info:
	 Machine ID:                 b77ec962e3734760b1e756ffc5e83152
	 System UUID:                fcb82c90-e30d-41cf-83d7-0b244092491c
	 Boot ID:                    e08f76ce-1642-432a-8e61-95aaa19183a7
	 Kernel Version:             4.9.0-15-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.7
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-ctgrx                                        100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     91s
	  kube-system                etcd-old-k8s-version-20210609012901-9941                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                kube-apiserver-old-k8s-version-20210609012901-9941             250m (3%)     0 (0%)      0 (0%)           0 (0%)         46s
	  kube-system                kube-controller-manager-old-k8s-version-20210609012901-9941    200m (2%)     0 (0%)      0 (0%)           0 (0%)         45s
	  kube-system                kube-proxy-97rr9                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kube-system                kube-scheduler-old-k8s-version-20210609012901-9941             100m (1%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                metrics-server-8546d8b77b-lqx7b                                100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         87s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-529qb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-5c7t7                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         88s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                             Message
	  ----    ------                   ----                 ----                                             -------
	  Normal  Starting                 116s                 kubelet, old-k8s-version-20210609012901-9941     Starting kubelet.
	  Normal  NodeHasSufficientMemory  116s (x8 over 116s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    116s (x8 over 116s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     116s (x7 over 116s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  116s                 kubelet, old-k8s-version-20210609012901-9941     Updated Node Allocatable limit across pods
	  Normal  Starting                 88s                  kube-proxy, old-k8s-version-20210609012901-9941  Starting kube-proxy.
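	Note: the request percentages above are computed against allocatable capacity (8 CPU, 32951376Ki memory), e.g. 750m/8000m ~ 9% CPU and 370Mi/~31.4Gi ~ 1% memory, which is why they all round so low. The node itself is healthy: Ready, untainted, no pressure conditions. To re-run this view (sketch):
	
	  kubectl --context old-k8s-version-20210609012901-9941 describe node old-k8s-version-20210609012901-9941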
	
	* 
	* ==> dmesg <==
	* [  +1.658653] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 5c c6 1f 63 8a 08 06        .......\..c...
	[  +0.004022] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e 5d 4b c1 e0 ed 08 06        .......]K.....
	[  +2.140856] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 3e a3 2b db cb b6 08 06        ......>.+.....
	[  +0.147751] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9a f2 40 59 da 87 08 06        ........@Y....
	[  +2.083360] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 56 9d 71 18 33 dd 08 06        ......V.q.3...
	[  +0.000616] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 8d b3 62 b0 07 08 06        .........b....
	[  +1.714381] IPv4: martian source 10.85.0.10 from 10.85.0.10, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e d1 b5 da bf 05 08 06        ..............
	[  +0.003822] IPv4: martian source 10.85.0.11 from 10.85.0.11, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 92 3a 5c 13 9f 7c 08 06        .......:\..|..
	[  +0.920701] IPv4: martian source 10.85.0.12 from 10.85.0.12, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 50 1c d3 1f 17 08 06        .......P......
	[  +0.002962] IPv4: martian source 10.85.0.13 from 10.85.0.13, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 86 09 69 5a 94 d2 08 06        ........iZ....
	[  +0.999987] IPv4: martian source 10.85.0.14 from 10.85.0.14, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 88 03 51 34 f3 08 06        .........Q4...
	[  +0.004235] IPv4: martian source 10.85.0.15 from 10.85.0.15, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 25 39 34 91 f2 08 06        .......%94....
	[  +6.380947] cgroup: cgroup2: unknown option "nsdelegate"
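	Note: dmesg reflects the shared host kernel, and the 10.85.0.x martian sources do not belong to this cluster's PodCIDR (10.244.0.0/24 above); they most likely come from other test profiles on the same CI host, and the cgroup2 "nsdelegate" warning just means the 4.9 host kernel predates that mount option. Sketch to list the other node containers sharing the host (the name filter is a substring match):
	
	  docker ps --filter name=9941 --format '{{.Names}}'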
	
	* 
	* ==> etcd [557ff658123d] <==
	* 2021-06-09 01:40:48.647414 W | wal: sync duration of 1.103904697s, expected less than 1s
	2021-06-09 01:40:48.753091 W | etcdserver: request "header:<ID:2289933000483394557 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" mod_revision:364 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" value_size:1214 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" > >>" with result "size:16" took too long (105.414042ms) to execute
	2021-06-09 01:40:48.753496 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (250.229741ms) to execute
	2021-06-09 01:40:48.753722 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-ctgrx\" " with result "range_response_count:1 size:1770" took too long (891.632545ms) to execute
	2021-06-09 01:40:50.467937 W | etcdserver: request "header:<ID:2289933000483394562 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:537 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:677 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (1.08693209s) to execute
	2021-06-09 01:40:50.468037 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.566131533s) to execute
	2021-06-09 01:40:50.468071 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:3347" took too long (1.710868913s) to execute
	2021-06-09 01:40:50.468206 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-529qb.1686c662e29f9611\" " with result "range_response_count:1 size:597" took too long (928.182072ms) to execute
	2021-06-09 01:40:51.483862 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-97rr9\" " with result "range_response_count:1 size:2147" took too long (1.013095215s) to execute
	2021-06-09 01:41:12.976673 W | wal: sync duration of 1.117225227s, expected less than 1s
	2021-06-09 01:41:13.114230 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3347" took too long (314.968585ms) to execute
	2021-06-09 01:41:13.114284 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d515c\" " with result "range_response_count:1 size:550" took too long (1.100437486s) to execute
	2021-06-09 01:41:13.114371 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7785" took too long (687.507808ms) to execute
	2021-06-09 01:41:13.114518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-lqx7b\" " with result "range_response_count:1 size:1851" took too long (1.101558003s) to execute
	2021-06-09 01:41:13.114553 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.387664ms) to execute
	2021-06-09 01:41:13.722674 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d9249\" " with result "range_response_count:1 size:511" took too long (603.050028ms) to execute
	2021-06-09 01:41:13.722784 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:395" took too long (601.855298ms) to execute
	2021-06-09 01:41:13.723059 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:1 size:187" took too long (573.108462ms) to execute
	2021-06-09 01:41:15.464247 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (1.450534843s) to execute
	2021-06-09 01:41:15.464304 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (166.55648ms) to execute
	2021-06-09 01:41:15.464595 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (144.856126ms) to execute
	2021-06-09 01:41:15.465036 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.527858302s) to execute
	2021-06-09 01:41:15.465734 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (313.803884ms) to execute
	2021-06-09 01:41:37.088502 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.483729ms) to execute
	2021-06-09 01:41:57.525183 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (146.394885ms) to execute
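	Note: the two "wal: sync duration" warnings mean an fsync of etcd's write-ahead log took over a second (1.10s and 1.12s), and the slow read-only range requests that follow are the knock-on effect; on a shared CI host this points at disk contention rather than an etcd fault, and it would also make apiserver-backed commands sluggish during those windows. Sketch to pull just these warnings from the member's full log (container ID from the status table):
	
	  docker exec old-k8s-version-20210609012901-9941 docker logs 557ff658123d4 2>&1 | grep 'sync duration'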
	
	* 
	* ==> kernel <==
	*  01:42:03 up  1:24,  0 users,  load average: 4.91, 3.39, 2.63
	Linux old-k8s-version-20210609012901-9941 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7435c96f8972] <==
	* I0609 01:41:51.475583       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:52.475740       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:52.475870       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:53.476020       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:53.476131       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:54.476295       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:54.476431       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:55.476606       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:55.476735       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:56.476937       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:56.477102       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:57.477291       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:57.477429       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:58.477563       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:58.477715       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:59.477874       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:59.478011       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:00.478169       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:00.478301       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:01.478453       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:01.478583       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:02.478748       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:02.478888       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:03.479048       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:03.479199       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
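	Note: the once-per-second OpenAPI AggregationController entries are routine reprocessing of the local delegation chain, not errors. The aggregation problem in this run surfaces elsewhere: the metrics.k8s.io APIService never becomes available because its backing metrics-server pod cannot start. Sketch to check that directly:
	
	  kubectl --context old-k8s-version-20210609012901-9941 get apiservices v1beta1.metrics.k8s.io
	  # Expect AVAILABLE=False while metrics-server has no ready endpoints.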
	
	* 
	* ==> kube-controller-manager [76559266dc96] <==
	* I0609 01:40:35.350957       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.355715       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.359115       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"af7ffe92-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	E0609 01:40:35.361941       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.362185       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.363976       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.365457       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.365465       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.367928       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.372059       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.372481       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.441817       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.441964       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.442412       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.442440       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.464444       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.464486       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.546527       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-529qb
	I0609 01:40:35.546799       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-5c7t7
	I0609 01:40:36.049812       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"af420efe-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-lqx7b
	E0609 01:41:02.997582       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0609 01:41:05.550860       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0609 01:41:33.249304       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0609 01:41:37.552663       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0609 01:42:03.500854       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
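	Note: two separate signals here. The FailedCreate burst at 01:40:35 is a bootstrap race: the ReplicaSets were reconciled a moment before the kubernetes-dashboard ServiceAccount existed, and the SuccessfulCreate events within the same second show it resolved on its own. The recurring resource_quota_controller and garbagecollector errors are the metrics.k8s.io unavailability again and will repeat for as long as metrics-server is down. Sketch to confirm the ServiceAccount exists now:
	
	  kubectl --context old-k8s-version-20210609012901-9941 -n kubernetes-dashboard get serviceaccount kubernetes-dashboard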
	
	* 
	* ==> kube-proxy [ef3565ebed50] <==
	* W0609 01:40:33.954499       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0609 01:40:33.964131       1 server_others.go:148] Using iptables Proxier.
	I0609 01:40:33.964802       1 server_others.go:178] Tearing down inactive rules.
	E0609 01:40:34.154995       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0609 01:40:35.290112       1 server.go:555] Version: v1.14.0
	I0609 01:40:35.341044       1 config.go:202] Starting service config controller
	I0609 01:40:35.341164       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0609 01:40:35.341748       1 config.go:102] Starting endpoints config controller
	I0609 01:40:35.343249       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0609 01:40:35.441725       1 controller_utils.go:1034] Caches are synced for service config controller
	I0609 01:40:35.443748       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [15294a1b99e5] <==
	* W0609 01:40:10.688361       1 authentication.go:55] Authentication is disabled
	I0609 01:40:10.688374       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0609 01:40:10.688743       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0609 01:40:12.981814       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0609 01:40:12.981916       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0609 01:40:12.982827       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0609 01:40:13.050964       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0609 01:40:13.062003       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0609 01:40:13.062138       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0609 01:40:13.062510       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0609 01:40:13.062930       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0609 01:40:13.064487       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0609 01:40:13.065331       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0609 01:40:13.982943       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0609 01:40:13.984017       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0609 01:40:13.985045       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0609 01:40:14.052710       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0609 01:40:14.063171       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0609 01:40:14.063859       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0609 01:40:14.065063       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0609 01:40:14.066262       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0609 01:40:14.067278       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0609 01:40:14.068396       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0609 01:40:15.890053       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0609 01:40:15.990228       1 controller_utils.go:1034] Caches are synced for scheduler controller
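	Note: the "forbidden" list errors between 01:40:12 and 01:40:14 occur while the control plane is still bootstrapping the default RBAC bindings for system:kube-scheduler; they stop once caches sync at 01:40:15 and are expected startup noise on this Kubernetes version. Sketch to verify the permission after bootstrap, using impersonation:
	
	  kubectl --context old-k8s-version-20210609012901-9941 auth can-i list pods --as=system:kube-scheduler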
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:03 UTC. --
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434392    6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434450    6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434528    6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434593    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.702071    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Jun 09 01:40:43 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:43.724887    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:40:44 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:44.734847    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:40:49 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:49.538510    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042394    6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042449    6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042530    6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042566    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:01 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:01.836699    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:09 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:09.538606    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:12 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:12.012609    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Jun 09 01:41:21 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:21.011631    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.040969    6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041003    6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041051    6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041074    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:35 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:35.034469    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:39 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:39.538621    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:40 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:40.012660    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Jun 09 01:41:52 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:52.011734    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:53 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:53.012733    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
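	Note: the kubelet log ties the two failing pods together: metrics-server alternates ErrImagePull and ImagePullBackOff on the unresolvable fake.domain image, while dashboard-metrics-scraper sits in CrashLoopBackOff with the back-off doubling 10s -> 20s -> 40s. Both loops predate the pause attempt at 01:41:59, so they read as pre-existing addon symptoms rather than an obvious cause of the exit status 80. Sketch to watch the crash loop directly:
	
	  kubectl --context old-k8s-version-20210609012901-9941 -n kubernetes-dashboard get pod dashboard-metrics-scraper-5b494cc544-529qb -w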
	
	* 
	* ==> kubernetes-dashboard [9230420d066a] <==
	* 2021/06/09 01:40:37 Starting overwatch
	2021/06/09 01:40:37 Using namespace: kubernetes-dashboard
	2021/06/09 01:40:37 Using in-cluster config to connect to apiserver
	2021/06/09 01:40:37 Using secret token for csrf signing
	2021/06/09 01:40:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/06/09 01:40:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/06/09 01:40:37 Successful initial request to the apiserver, version: v1.14.0
	2021/06/09 01:40:37 Generating JWE encryption key
	2021/06/09 01:40:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/06/09 01:40:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/06/09 01:40:37 Initializing JWE encryption key from synchronized object
	2021/06/09 01:40:37 Creating in-cluster Sidecar client
	2021/06/09 01:40:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/06/09 01:40:37 Serving insecurely on HTTP port: 9090
	2021/06/09 01:41:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/06/09 01:41:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
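	Note: the dashboard itself serves normally on port 9090; the metric-client health check failing every 30 seconds is it being unable to reach the dashboard-metrics-scraper service, consistent with that pod's crash loop. Sketch to see whether the service has any ready endpoints:
	
	  kubectl --context old-k8s-version-20210609012901-9941 -n kubernetes-dashboard get endpoints dashboard-metrics-scraper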
	
	* 
	* ==> storage-provisioner [d27ec4783cae] <==
	* I0609 01:40:36.443365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0609 01:40:36.452888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0609 01:40:36.452950       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0609 01:40:36.459951       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0609 01:40:36.460148       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
	I0609 01:40:36.461060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af273732-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d became leader
	I0609 01:40:36.560264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
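	Note: the storage-provisioner came up cleanly and won its leader lease; nothing here relates to the failure. The periodic writes to kube-system/k8s.io-minikube-hostpath in the etcd log above are this lease being renewed. Sketch to inspect the lock object (the holder is recorded in an annotation):
	
	  kubectl --context old-k8s-version-20210609012901-9941 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml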
	

                                                
                                                
-- /stdout --
helpers_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
E0609 01:42:04.497051    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
helpers_test.go:257: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: metrics-server-8546d8b77b-lqx7b
helpers_test.go:265: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1 (82.688697ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-lqx7b" not found

                                                
                                                
** /stderr **
helpers_test.go:270: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1
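Note: the NotFound above is an artifact of the post-mortem helper, not of the cluster: the describe was issued without a namespace, so it looked in "default", while metrics-server-8546d8b77b-lqx7b lives in kube-system (the non-running-pods listing just before it searched all namespaces). The working form would be:

    kubectl --context old-k8s-version-20210609012901-9941 -n kube-system describe pod metrics-server-8546d8b77b-lqx7b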
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect old-k8s-version-20210609012901-9941
helpers_test.go:231: (dbg) docker inspect old-k8s-version-20210609012901-9941:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f",
	        "Created": "2021-06-09T01:32:22.976408213Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 300855,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-06-09T01:34:39.439780041Z",
	            "FinishedAt": "2021-06-09T01:34:37.912284168Z"
	        },
	        "Image": "sha256:9fce26cb202ecbcb479d0e9dcc943ed048e5957c0bb68667d9476ebc413ee6d7",
	        "ResolvConfPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hostname",
	        "HostsPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/hosts",
	        "LogPath": "/var/lib/docker/containers/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f/91dce77935ba4eff85240097e434e182e33003cb23a9a85ae8537d003069c32f-json.log",
	        "Name": "/old-k8s-version-20210609012901-9941",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "old-k8s-version-20210609012901-9941:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "old-k8s-version-20210609012901-9941",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614-init/diff:/var/lib/docker/overlay2/bc56a5d6f9b885d4e990c356e0ccfc01ecbed88f252ebfaa9441de3180832d7f/diff:/var/lib/docker/overlay2/25b993e35a4369dc1c3bb5a1579e6e35329eea51bcbd403abb32859a67061a54/diff:/var/lib/docker/overlay2/1fe8141f79894ceaa71723e3cebb26aaf6eb09b92957f7ef1ad563a53df17477/diff:/var/lib/docker/overlay2/c43074dca065bc9311721e20aecd4b6af65294c44e7d9ff6f84a18717d22f9da/diff:/var/lib/docker/overlay2/1318b2c7f3cf224a7ccebeb69bbc1127489945bbb88c21f3171770868a161187/diff:/var/lib/docker/overlay2/c38fd14f646377d81cc91524a921d99d0518ca09e12d17c45948037013fd9100/diff:/var/lib/docker/overlay2/3860f2d47e6d7da92eb5946fda824e25f4c789d00d7e8daa71d0200aac14b536/diff:/var/lib/docker/overlay2/f55aac0c255ec87a42f4d6bc6e79a51ccac3a1d472b1ef4565f141af1acedb04/diff:/var/lib/docker/overlay2/7a1f3b94ec1a7fec96e3f1c789cb025636706f45db2f63cafd48827296910d1d/diff:/var/lib/docker/overlay2/653b9d
24f60635898ac8c6e1b372c54937a708e1e483d47012bc30c58bba0c8c/diff:/var/lib/docker/overlay2/c1832b167afb6406029f607ff5bfad73774ce698299c2b90633d157123654c52/diff:/var/lib/docker/overlay2/75fc291915e6994891ddc9a151bd4c24056ab74e6c8428ba1aef2b2949bbc56e/diff:/var/lib/docker/overlay2/8187764e5fdd094760f8daef22c41c28995fd009c1c56d956db1bb78266b84b2/diff:/var/lib/docker/overlay2/8257db85fb8192780c9e79b131704c61b85e47f9e5c7152097b1a341d06f5840/diff:/var/lib/docker/overlay2/e7499e6556225f397b775719266146f16285f25036f4cf348b09e2fd3be18982/diff:/var/lib/docker/overlay2/84dea696e080b4925128f5b32c22c548c34a63a9dfafa5cb45a932dded279620/diff:/var/lib/docker/overlay2/0646a50eb26264b2a4349823800615095034ab376268714c37e1193106307a2a/diff:/var/lib/docker/overlay2/873d4336e86132442a84ef0da60e4f8fdf8e4989093c0f2a4279120e10ad4f2c/diff:/var/lib/docker/overlay2/44007c68fc2016e815ed96a5faadd25bfb35c362bf1b0521c430ef2ea3805f42/diff:/var/lib/docker/overlay2/7f832f8cf06c783bc6789b50392d803201e52f6baa4a788b5ce48169c94316eb/diff:/var/lib/d
ocker/overlay2/aa919f3d56d7f8b40e56ee381db724e83ee09c96eb696e67326ae47e81324228/diff:/var/lib/docker/overlay2/c53704cae60bb8bd8b355c2d6fb142c9e105dbfeeece4ba9ee0eb81aaaa83fe9/diff:/var/lib/docker/overlay2/1d80475a809da44174d557238fbb00860567d808a157fc2291ac5fedb6f8b2d2/diff:/var/lib/docker/overlay2/d7e1256a346a88b7ce7e6fe9d6ab1146a2c7705c99fcb974ad10b671573b6b83/diff:/var/lib/docker/overlay2/67dc882ee4f992f5a9dc58b56bf7d7a6e78ffe50ccd6227d33d9e2047b7ff877/diff:/var/lib/docker/overlay2/156a8e643f241fdf84afe135ad766dbedd0c515a725939d012de628eb9dd2013/diff:/var/lib/docker/overlay2/ee244a7deb19ed9dc719af435d92c54624874690ce0999c7d030e2f57ecb9e6a/diff:/var/lib/docker/overlay2/91f8a889599c1faaa7f40cc449793deff620d17e83e88dac22c223f131237b12/diff:/var/lib/docker/overlay2/fa8fc61ecf97cd7f2b96efc9d54ba3d9a5b32dcdbb844f360ee173af8fae43a7/diff:/var/lib/docker/overlay2/908106b57878c9eeda6e0d202eee052dee30050250f2a3e5c7d61739d6548623/diff:/var/lib/docker/overlay2/98083c942683a1ac5defcb4b953ba78bbab830ad8c88c4dd145379ebe55
e20a9/diff:/var/lib/docker/overlay2/980703603c9fd3a987c703f9800e56f69031cc7d19f3c692d95eb0937cbb5fd7/diff:/var/lib/docker/overlay2/bc7be9aeb566f06fe346d144629a571aec3e378e82aedf4d6c3fb065569091b2/diff:/var/lib/docker/overlay2/e61aabb9eb2161801d4795e4a00f41afd54c504a52aeeef70d49d2a4f47fcd99/diff:/var/lib/docker/overlay2/a69e80d9160e6158cf9f37881d60928bf3221341b1fffe8d2855488233278102/diff:/var/lib/docker/overlay2/f76fd1ba3588d22f5228ab597df7a62e20a79217c1712dbc33e20061e12891c6/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e06a36ecd342e83282e80589cd0b96a25668d1c022258253a30c4abc82951614/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "old-k8s-version-20210609012901-9941",
	                "Source": "/var/lib/docker/volumes/old-k8s-version-20210609012901-9941/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "old-k8s-version-20210609012901-9941",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
	                "name.minikube.sigs.k8s.io": "old-k8s-version-20210609012901-9941",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "1aaecc7a078c61af85d4e6c7c12ffcbc3226c3c0b6bdcdb83ef76e454d99e1ed",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32960"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32959"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32956"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32958"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32957"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/1aaecc7a078c",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "old-k8s-version-20210609012901-9941": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.67.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "91dce77935ba"
	                    ],
	                    "NetworkID": "3b40e12707af96d7a87ef0baaec85159df278a3dc4bf817ecae3932e0bcfbdd2",
	                    "EndpointID": "c1650ce3840b80594246acc2f9fcfa432a39e6b48bada03c110930f25ecac707",
	                    "Gateway": "192.168.67.1",
	                    "IPAddress": "192.168.67.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:43:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
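Every container port in the inspect output above is published to an ephemeral host port bound to 127.0.0.1 (22/tcp landed on 32960, 8443/tcp on 32957). As a minimal sketch, the same Go-template mechanism the harness uses later in this log can pull a single mapping out without parsing the full JSON; the profile/container name is taken from this run:

	# host port mapped to the container's SSH port (same template minikube itself runs)
	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' old-k8s-version-20210609012901-9941
	# or list every published mapping at once
	docker port old-k8s-version-20210609012901-9941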
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
helpers_test.go:240: <<< TestStartStop/group/old-k8s-version/serial/Pause FAILED: start of post-mortem logs <<<
helpers_test.go:241: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: minikube logs <======
helpers_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25
helpers_test.go:243: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-20210609012901-9941 logs -n 25: (1.105633065s)
helpers_test.go:248: TestStartStop/group/old-k8s-version/serial/Pause logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
	| Command |                            Args                            |                    Profile                     |  User   |    Version     |          Start Time           |           End Time            |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
	| addons  | enable dashboard -p                                        | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:37:48 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | --images=MetricsScraper=k8s.gcr.io/echoserver:1.4          |                                                |         |                |                               |                               |
	| start   | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:31:33 UTC | Wed, 09 Jun 2021 01:37:54 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                               |                               |
	|         | --wait=true --embed-certs                                  |                                                |         |                |                               |                               |
	|         | --driver=docker                                            |                                                |         |                |                               |                               |
	|         | --container-runtime=docker                                 |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.20.7                               |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:05 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	| pause   | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:05 UTC | Wed, 09 Jun 2021 01:38:06 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| start   | -p newest-cni-20210609013655-9941 --memory=2200            | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:37:48 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
	|         | --alsologtostderr --wait=apiserver,system_pods,default_sa  |                                                |         |                |                               |                               |
	|         | --feature-gates ServerSideApply=true --network-plugin=cni  |                                                |         |                |                               |                               |
	|         | --extra-config=kubelet.network-plugin=cni                  |                                                |         |                |                               |                               |
	|         | --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 |                                                |         |                |                               |                               |
	|         | --driver=docker  --container-runtime=docker                |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.22.0-alpha.2                       |                                                |         |                |                               |                               |
	| unpause | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:07 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:07 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	| pause   | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:08 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| unpause | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:09 UTC | Wed, 09 Jun 2021 01:38:10 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| delete  | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:08 UTC | Wed, 09 Jun 2021 01:38:11 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	| delete  | -p                                                         | embed-certs-20210609012903-9941                | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:12 UTC |
	|         | embed-certs-20210609012903-9941                            |                                                |         |                |                               |                               |
	| delete  | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:11 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	| delete  | -p                                                         | newest-cni-20210609013655-9941                 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:38:14 UTC |
	|         | newest-cni-20210609013655-9941                             |                                                |         |                |                               |                               |
	| start   | -p false-20210609012810-9941                               | false-20210609012810-9941                      | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:38:14 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
	|         | --memory=2048                                              |                                                |         |                |                               |                               |
	|         | --alsologtostderr                                          |                                                |         |                |                               |                               |
	|         | --wait=true --wait-timeout=5m                              |                                                |         |                |                               |                               |
	|         | --cni=false --driver=docker                                |                                                |         |                |                               |                               |
	|         | --container-runtime=docker                                 |                                                |         |                |                               |                               |
	| ssh     | -p false-20210609012810-9941                               | false-20210609012810-9941                      | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:39:52 UTC | Wed, 09 Jun 2021 01:39:52 UTC |
	|         | pgrep -a kubelet                                           |                                                |         |                |                               |                               |
	| delete  | -p false-20210609012810-9941                               | false-20210609012810-9941                      | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:07 UTC | Wed, 09 Jun 2021 01:40:10 UTC |
	| start   | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:32:07 UTC | Wed, 09 Jun 2021 01:40:19 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                               |                               |
	|         | --wait=true --apiserver-port=8444                          |                                                |         |                |                               |                               |
	|         | --driver=docker  --container-runtime=docker                |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.20.7                               |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:29 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	| pause   | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:30 UTC | Wed, 09 Jun 2021 01:40:30 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| unpause | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:31 UTC | Wed, 09 Jun 2021 01:40:32 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	|         | --alsologtostderr -v=1                                     |                                                |         |                |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:32 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	| delete  | -p                                                         | default-k8s-different-port-20210609012935-9941 | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:40:36 UTC | Wed, 09 Jun 2021 01:40:36 UTC |
	|         | default-k8s-different-port-20210609012935-9941             |                                                |         |                |                               |                               |
	| start   | -p                                                         | old-k8s-version-20210609012901-9941            | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:34:38 UTC | Wed, 09 Jun 2021 01:41:48 UTC |
	|         | old-k8s-version-20210609012901-9941                        |                                                |         |                |                               |                               |
	|         | --memory=2200 --alsologtostderr                            |                                                |         |                |                               |                               |
	|         | --wait=true --kvm-network=default                          |                                                |         |                |                               |                               |
	|         | --kvm-qemu-uri=qemu:///system                              |                                                |         |                |                               |                               |
	|         | --disable-driver-mounts                                    |                                                |         |                |                               |                               |
	|         | --keep-context=false                                       |                                                |         |                |                               |                               |
	|         | --driver=docker                                            |                                                |         |                |                               |                               |
	|         | --container-runtime=docker                                 |                                                |         |                |                               |                               |
	|         | --kubernetes-version=v1.14.0                               |                                                |         |                |                               |                               |
	| ssh     | -p                                                         | old-k8s-version-20210609012901-9941            | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:41:58 UTC | Wed, 09 Jun 2021 01:41:59 UTC |
	|         | old-k8s-version-20210609012901-9941                        |                                                |         |                |                               |                               |
	|         | sudo crictl images -o json                                 |                                                |         |                |                               |                               |
	| -p      | old-k8s-version-20210609012901-9941                        | old-k8s-version-20210609012901-9941            | jenkins | v1.21.0-beta.0 | Wed, 09 Jun 2021 01:42:02 UTC | Wed, 09 Jun 2021 01:42:04 UTC |
	|         | logs -n 25                                                 |                                                |         |                |                               |                               |
	|---------|------------------------------------------------------------|------------------------------------------------|---------|----------------|-------------------------------|-------------------------------|
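Each Audit row above is a literal CLI invocation recorded with its profile, user, minikube version, and start/end times, so any step can be replayed against the same profile while it still exists. For example, the runtime image check recorded at 01:41:58 (quoting around the remote command added here for the shell):

	out/minikube-linux-amd64 ssh -p old-k8s-version-20210609012901-9941 "sudo crictl images -o json"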
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/06/09 01:40:36
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0609 01:40:36.631110  352096 out.go:291] Setting OutFile to fd 1 ...
	I0609 01:40:36.631229  352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:40:36.631240  352096 out.go:304] Setting ErrFile to fd 2...
	I0609 01:40:36.631245  352096 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:40:36.631477  352096 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	I0609 01:40:36.632033  352096 out.go:298] Setting JSON to false
	I0609 01:40:36.673982  352096 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":5000,"bootTime":1623197837,"procs":265,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0609 01:40:36.674111  352096 start.go:121] virtualization: kvm guest
	I0609 01:40:36.676163  352096 out.go:170] * [calico-20210609012810-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
	I0609 01:40:36.678185  352096 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	I0609 01:40:36.679873  352096 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0609 01:40:36.681411  352096 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
	I0609 01:40:36.683678  352096 out.go:170]   - MINIKUBE_LOCATION=11610
	I0609 01:40:36.685630  352096 driver.go:335] Setting default libvirt URI to qemu:///system
	I0609 01:40:36.743399  352096 docker.go:132] docker version: linux-19.03.15
	I0609 01:40:36.743512  352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 01:40:36.834766  352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.791625716 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 01:40:36.834840  352096 docker.go:244] overlay module found
	I0609 01:40:36.837087  352096 out.go:170] * Using the docker driver based on user configuration
	I0609 01:40:36.837110  352096 start.go:279] selected driver: docker
	I0609 01:40:36.837115  352096 start.go:752] validating driver "docker" against <nil>
	I0609 01:40:36.837133  352096 start.go:763] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0609 01:40:36.837178  352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0609 01:40:36.837196  352096 out.go:235] ! Your cgroup does not allow setting memory.
	I0609 01:40:36.838992  352096 out.go:170]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
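The warning above means minikube cannot enforce `--memory` on this host because the kernel lacks memory/swap limit support or the corresponding cgroup controller is not mounted (the `docker info` dump that follows indeed reports SwapLimit:false and a "No swap limit support" warning). A quick way to check the same thing by hand, assuming a standard docker CLI and Linux /proc layout:

	# what the daemon itself believes
	docker info --format 'MemoryLimit={{.MemoryLimit}} SwapLimit={{.SwapLimit}}'
	# kernel-side view: "1" in the last (enabled) column means the controller is available
	grep -E 'memory' /proc/cgroups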
	I0609 01:40:36.839863  352096 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 01:40:36.932062  352096 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:45 SystemTime:2021-06-09 01:40:36.890557056 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 01:40:36.932180  352096 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0609 01:40:36.932334  352096 start_flags.go:656] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0609 01:40:36.932354  352096 cni.go:93] Creating CNI manager for "calico"
	I0609 01:40:36.932360  352096 start_flags.go:268] Found "Calico" CNI - setting NetworkPlugin=cni
	I0609 01:40:36.932385  352096 start_flags.go:273] config:
	{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 01:40:36.934649  352096 out.go:170] * Starting control plane node calico-20210609012810-9941 in cluster calico-20210609012810-9941
	I0609 01:40:36.934693  352096 cache.go:115] Beginning downloading kic base image for docker with docker
	I0609 01:40:36.936147  352096 out.go:170] * Pulling base image ...
	I0609 01:40:36.936172  352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:40:36.936194  352096 preload.go:125] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
	I0609 01:40:36.936205  352096 cache.go:54] Caching tarball of preloaded images
	I0609 01:40:36.936277  352096 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
	I0609 01:40:36.936357  352096 preload.go:166] Found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0609 01:40:36.936376  352096 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.7 on docker
	I0609 01:40:36.936388  352096 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
	I0609 01:40:36.936410  352096 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
	I0609 01:40:36.936420  352096 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
	I0609 01:40:36.936434  352096 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
	I0609 01:40:36.936440  352096 image.go:74] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon
	I0609 01:40:36.936479  352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
	I0609 01:40:36.936497  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json: {Name:mk031fde7609ae3e97daec785ed839e7488473bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:37.048612  352096 image.go:78] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local docker daemon, skipping pull
	I0609 01:40:37.048657  352096 cache.go:146] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in daemon, skipping load
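The cache decisions above reduce to two lookups: the preload tarball already present on disk, and the digest-pinned kicbase image already present in the local daemon. The daemon-side half of that check can be reproduced by hand; this is a sketch only, assuming the image was previously pulled by this exact digest reference (copied from the log):

	# exits 0 and prints the image ID if the reference resolves locally, so the pull can be skipped
	docker image inspect gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 --format '{{.Id}}' \
		&& echo "present in daemon, skipping pull"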
	I0609 01:40:37.048675  352096 cache.go:202] Successfully downloaded all kic artifacts
	I0609 01:40:37.048728  352096 start.go:313] acquiring machines lock for calico-20210609012810-9941: {Name:mkae53a330b20aaf52e1813b8aee573fcaaec970 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 01:40:37.048858  352096 start.go:317] acquired machines lock for "calico-20210609012810-9941" in 106.275µs
	I0609 01:40:37.048894  352096 start.go:89] Provisioning new machine with config: &{Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
	I0609 01:40:37.049004  352096 start.go:126] createHost starting for "" (driver="docker")
	I0609 01:40:34.017726  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:37.085772  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:35.678351  300573 out.go:170] * Enabled addons: storage-provisioner, default-storageclass, metrics-server, dashboard
	I0609 01:40:35.678380  300573 addons.go:344] enableAddons completed in 2.095265934s
	I0609 01:40:35.865805  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:38.366329  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:35.493169  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:35.992256  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:36.492949  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:36.992808  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:37.492406  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:37.992460  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:38.492814  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:38.993013  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:39.492346  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:39.992376  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
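Logs from several parallel test processes interleave here (the leading number after the timestamp is the threadid per the log header); the repeated `kubectl get sa default` calls belong to 344705 and are a readiness gate: start blocks until the default ServiceAccount exists in the fresh cluster, retrying roughly every half second per the timestamps above. A hypothetical standalone equivalent of that poll, reusing the exact command from the log:

	# retry until the default ServiceAccount appears in the new cluster
	until sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig; do sleep 0.5; done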
	I0609 01:40:37.051194  352096 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0609 01:40:37.051469  352096 start.go:160] libmachine.API.Create for "calico-20210609012810-9941" (driver="docker")
	I0609 01:40:37.051513  352096 client.go:168] LocalClient.Create starting
	I0609 01:40:37.051649  352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
	I0609 01:40:37.051689  352096 main.go:128] libmachine: Decoding PEM data...
	I0609 01:40:37.051712  352096 main.go:128] libmachine: Parsing certificate...
	I0609 01:40:37.051880  352096 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
	I0609 01:40:37.051910  352096 main.go:128] libmachine: Decoding PEM data...
	I0609 01:40:37.051926  352096 main.go:128] libmachine: Parsing certificate...
	I0609 01:40:37.052424  352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0609 01:40:37.099637  352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0609 01:40:37.099719  352096 network_create.go:255] running [docker network inspect calico-20210609012810-9941] to gather additional debugging logs...
	I0609 01:40:37.099742  352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941
	W0609 01:40:37.138707  352096 cli_runner.go:162] docker network inspect calico-20210609012810-9941 returned with exit code 1
	I0609 01:40:37.138742  352096 network_create.go:258] error running [docker network inspect calico-20210609012810-9941]: docker network inspect calico-20210609012810-9941: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: calico-20210609012810-9941
	I0609 01:40:37.138765  352096 network_create.go:260] output of [docker network inspect calico-20210609012810-9941]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: calico-20210609012810-9941
	
	** /stderr **
	I0609 01:40:37.138809  352096 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:40:37.177770  352096 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
	I0609 01:40:37.178451  352096 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc00072a3b8] misses:0}
	I0609 01:40:37.178494  352096 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0609 01:40:37.178511  352096 network_create.go:106] attempt to create docker network calico-20210609012810-9941 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0609 01:40:37.178562  352096 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true calico-20210609012810-9941
	I0609 01:40:37.256968  352096 network_create.go:90] docker network calico-20210609012810-9941 192.168.58.0/24 created
	I0609 01:40:37.257004  352096 kic.go:106] calculated static IP "192.168.58.2" for the "calico-20210609012810-9941" container
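Subnet selection above walks minikube's private ranges, skips 192.168.49.0/24 because an existing bridge interface already owns it, reserves 192.168.58.0/24 for one minute, and then creates the network with an explicit gateway. A small sketch of verifying the result with the docker CLI (network name taken from this run):

	# confirm the subnet and gateway minikube configured
	docker network inspect calico-20210609012810-9941 --format '{{range .IPAM.Config}}{{.Subnet}} {{.Gateway}}{{end}}'
	# list all networks to see which names/subnets are already taken
	docker network ls --format '{{.Name}}'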
	I0609 01:40:37.257070  352096 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0609 01:40:37.300737  352096 cli_runner.go:115] Run: docker volume create calico-20210609012810-9941 --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true
	I0609 01:40:37.340542  352096 oci.go:102] Successfully created a docker volume calico-20210609012810-9941
	I0609 01:40:37.340623  352096 cli_runner.go:115] Run: docker run --rm --name calico-20210609012810-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --entrypoint /usr/bin/test -v calico-20210609012810-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
	I0609 01:40:38.148995  352096 oci.go:106] Successfully prepared a docker volume calico-20210609012810-9941
	W0609 01:40:38.149052  352096 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0609 01:40:38.149065  352096 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0609 01:40:38.149126  352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:40:38.149132  352096 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0609 01:40:38.149158  352096 kic.go:179] Starting extracting preloaded images to volume ...
	I0609 01:40:38.149224  352096 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
	I0609 01:40:38.241538  352096 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname calico-20210609012810-9941 --name calico-20210609012810-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=calico-20210609012810-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=calico-20210609012810-9941 --network calico-20210609012810-9941 --ip 192.168.58.2 --volume calico-20210609012810-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
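Note the `--publish=127.0.0.1::8443` form in the run command above: the empty host-port field asks the daemon to pick an ephemeral loopback port for each exposed service, which is why the inspect output earlier in this report shows assigned ports like 32957 rather than fixed ones. The assigned port can be recovered afterwards, e.g.:

	# host port the daemon chose for the container's 8443/tcp
	docker port calico-20210609012810-9941 8443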
	I0609 01:40:38.853918  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Running}}
	I0609 01:40:38.906203  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:38.959124  352096 cli_runner.go:115] Run: docker exec calico-20210609012810-9941 stat /var/lib/dpkg/alternatives/iptables
	I0609 01:40:39.108798  352096 oci.go:278] the created container "calico-20210609012810-9941" has a running status.
	I0609 01:40:39.108836  352096 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa...
	I0609 01:40:39.198235  352096 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0609 01:40:39.602006  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:39.652085  352096 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0609 01:40:39.652109  352096 kic_runner.go:115] Args: [docker exec --privileged calico-20210609012810-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0609 01:40:40.132328  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:40.865096  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:42.865643  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:41.950654  352096 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v calico-20210609012810-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (3.801357977s)
	I0609 01:40:41.950723  352096 kic.go:188] duration metric: took 3.801562 seconds to extract preloaded images to volume
	I0609 01:40:41.950817  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:41.990470  352096 machine.go:88] provisioning docker machine ...
	I0609 01:40:41.990506  352096 ubuntu.go:169] provisioning hostname "calico-20210609012810-9941"
	I0609 01:40:41.990596  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.031665  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.031889  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.031912  352096 main.go:128] libmachine: About to run SSH command:
	sudo hostname calico-20210609012810-9941 && echo "calico-20210609012810-9941" | sudo tee /etc/hostname
	I0609 01:40:42.168989  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: calico-20210609012810-9941
	
	I0609 01:40:42.169058  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.214838  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.214999  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.215023  352096 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scalico-20210609012810-9941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 calico-20210609012810-9941/g' /etc/hosts;
				else 
					echo '127.0.1.1 calico-20210609012810-9941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 01:40:42.332932  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0609 01:40:42.332992  352096 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
	I0609 01:40:42.333032  352096 ubuntu.go:177] setting up certificates
	I0609 01:40:42.333040  352096 provision.go:83] configureAuth start
	I0609 01:40:42.333091  352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
	I0609 01:40:42.372958  352096 provision.go:137] copyHostCerts
	I0609 01:40:42.373013  352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
	I0609 01:40:42.373030  352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
	I0609 01:40:42.373084  352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
	I0609 01:40:42.373174  352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
	I0609 01:40:42.373185  352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
	I0609 01:40:42.373208  352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
	I0609 01:40:42.373272  352096 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
	I0609 01:40:42.373298  352096 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
	I0609 01:40:42.373324  352096 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
	I0609 01:40:42.373372  352096 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.calico-20210609012810-9941 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube calico-20210609012810-9941]
	I0609 01:40:42.470940  352096 provision.go:171] copyRemoteCerts
	I0609 01:40:42.470996  352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 01:40:42.471030  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.516819  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:42.604293  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0609 01:40:42.620326  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0609 01:40:42.635125  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0609 01:40:42.650438  352096 provision.go:86] duration metric: configureAuth took 317.389022ms
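
configureAuth has now staged ca.pem, server.pem and server-key.pem under /etc/docker inside the node, which is what lets the dockerd configured below require mutually-authenticated TLS on tcp://0.0.0.0:2376. Once the daemon is restarted, that endpoint can be exercised from the host with the matching client material; a sketch (paths abbreviated relative to this run's .minikube, and the 2376 host mapping must be looked up — it is not the SSH port 32985 seen above):

    # Talk to the node's TLS-guarded dockerd using minikube's client certs.
    PORT=$(docker port calico-20210609012810-9941 2376/tcp | cut -d: -f2)
    docker --tlsverify \
      --tlscacert "$HOME/.minikube/certs/ca.pem" \
      --tlscert   "$HOME/.minikube/certs/cert.pem" \
      --tlskey    "$HOME/.minikube/certs/key.pem" \
      -H tcp://127.0.0.1:"$PORT" version
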
	I0609 01:40:42.650459  352096 ubuntu.go:193] setting minikube options for container-runtime
	I0609 01:40:42.650643  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.690608  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.690768  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.690789  352096 main.go:128] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 01:40:42.809400  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 01:40:42.809436  352096 ubuntu.go:71] root file system type: overlay
	I0609 01:40:42.809629  352096 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 01:40:42.809695  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:42.849952  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:42.850124  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:42.850223  352096 main.go:128] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 01:40:42.982970  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 01:40:42.983065  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.031885  352096 main.go:128] libmachine: Using SSH client type: native
	I0609 01:40:43.032086  352096 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32985 <nil> <nil>}
	I0609 01:40:43.032118  352096 main.go:128] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 01:40:43.625675  352096 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-06-09 01:40:42.981589018 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
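
The command at 01:40:43.032118 is an idempotent-update idiom: the freshly rendered unit is always written to docker.service.new, but the move, daemon-reload, enable and restart only run when diff exits non-zero, i.e. when the unit actually changed (as the diff output above shows it did here). Restated on its own:

    # Swap in the new unit and bounce dockerd only if it differs.
    sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || {
      sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service
      sudo systemctl daemon-reload
      sudo systemctl enable docker
      sudo systemctl restart docker
    }
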
	
	I0609 01:40:43.625711  352096 machine.go:91] provisioned docker machine in 1.635218617s
	I0609 01:40:43.625725  352096 client.go:171] LocalClient.Create took 6.574201593s
	I0609 01:40:43.625748  352096 start.go:168] duration metric: libmachine.API.Create for "calico-20210609012810-9941" took 6.574278241s
	I0609 01:40:43.625761  352096 start.go:267] post-start starting for "calico-20210609012810-9941" (driver="docker")
	I0609 01:40:43.625768  352096 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 01:40:43.625839  352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 01:40:43.625883  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.667182  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:43.752939  352096 ssh_runner.go:149] Run: cat /etc/os-release
	I0609 01:40:43.755722  352096 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 01:40:43.755749  352096 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 01:40:43.755763  352096 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 01:40:43.755771  352096 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0609 01:40:43.755788  352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
	I0609 01:40:43.755837  352096 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
	I0609 01:40:43.755931  352096 start.go:270] post-start completed in 130.162299ms
	I0609 01:40:43.756175  352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
	I0609 01:40:43.794853  352096 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/config.json ...
	I0609 01:40:43.795091  352096 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:40:43.795138  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.833691  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:43.917790  352096 start.go:129] duration metric: createHost completed in 6.868772218s
	I0609 01:40:43.917824  352096 start.go:80] releasing machines lock for "calico-20210609012810-9941", held for 6.868947784s
	I0609 01:40:43.917911  352096 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" calico-20210609012810-9941
	I0609 01:40:43.958012  352096 ssh_runner.go:149] Run: systemctl --version
	I0609 01:40:43.958067  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.958087  352096 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0609 01:40:43.958148  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:40:43.999990  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:44.000156  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:44.105048  352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service containerd
	I0609 01:40:44.113782  352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:40:44.122327  352096 cruntime.go:225] skipping containerd shutdown because we are bound to it
	I0609 01:40:44.122397  352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0609 01:40:44.130910  352096 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0609 01:40:44.142773  352096 ssh_runner.go:149] Run: sudo systemctl unmask docker.service
	I0609 01:40:44.201078  352096 ssh_runner.go:149] Run: sudo systemctl enable docker.socket
	I0609 01:40:44.256269  352096 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:40:44.264833  352096 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0609 01:40:44.317328  352096 ssh_runner.go:149] Run: sudo systemctl start docker
	I0609 01:40:44.325668  352096 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0609 01:40:40.492907  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:40.992189  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:41.493228  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:41.993005  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:42.492386  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:42.992261  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:43.493058  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:43.993022  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:44.492490  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:44.993036  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:44.373093  352096 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
	I0609 01:40:44.373166  352096 cli_runner.go:115] Run: docker network inspect calico-20210609012810-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:40:44.410011  352096 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0609 01:40:44.413077  352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
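
Both /etc/hosts edits in this log (host.minikube.internal here, control-plane.minikube.internal later) use the same pattern: a plain `sudo cmd > /etc/hosts` would apply the redirect with the caller's privileges, so the filtered-and-appended content is built in /tmp first and installed with sudo cp. With placeholder values:

    # Idempotently pin NAME to IP in /etc/hosts without a root redirect.
    IP=192.168.58.1 NAME=host.minikube.internal    # placeholders
    { grep -v $'\t'"$NAME"'$' /etc/hosts; printf '%s\t%s\n' "$IP" "$NAME"; } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts
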
	I0609 01:40:44.422262  352096 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.crt
	I0609 01:40:44.422356  352096 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
	I0609 01:40:44.422503  352096 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:40:44.422549  352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:40:44.461776  352096 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:40:44.461803  352096 docker.go:466] Images already preloaded, skipping extraction
	I0609 01:40:44.461856  352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:40:44.498947  352096 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:40:44.498975  352096 cache_images.go:74] Images are preloaded, skipping loading
	I0609 01:40:44.499029  352096 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0609 01:40:44.584207  352096 cni.go:93] Creating CNI manager for "calico"
	I0609 01:40:44.584229  352096 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0609 01:40:44.584247  352096 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:calico-20210609012810-9941 NodeName:calico-20210609012810-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0609 01:40:44.584403  352096 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "calico-20210609012810-9941"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.7
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
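
The four YAML documents above (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) are rendered on the host and, as the scp lines below show, shipped to the node as /var/tmp/minikube/kubeadm.yaml.new before being handed to the `kubeadm init --config ...` call at 01:40:45. On a live profile the rendered file can be inspected directly, e.g.:

    # Read the kubeadm config minikube actually shipped to the node.
    minikube ssh -p calico-20210609012810-9941 -- sudo cat /var/tmp/minikube/kubeadm.yaml
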
	
	I0609 01:40:44.584487  352096 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=calico-20210609012810-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.58.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:}
	I0609 01:40:44.584549  352096 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
	I0609 01:40:44.591407  352096 binaries.go:44] Found k8s binaries, skipping transfer
	I0609 01:40:44.591476  352096 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0609 01:40:44.597626  352096 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0609 01:40:44.609338  352096 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0609 01:40:44.620431  352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1885 bytes)
	I0609 01:40:44.631725  352096 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0609 01:40:44.634357  352096 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0609 01:40:44.642326  352096 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941 for IP: 192.168.58.2
	I0609 01:40:44.642377  352096 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
	I0609 01:40:44.642394  352096 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
	I0609 01:40:44.642461  352096 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/client.key
	I0609 01:40:44.642481  352096 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041
	I0609 01:40:44.642488  352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0609 01:40:44.840681  352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 ...
	I0609 01:40:44.840717  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041: {Name:mkfc84e07035095def340a1ef0c06b8c2f56c745 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.840897  352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 ...
	I0609 01:40:44.840910  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041: {Name:mk3b1eccc9f0abe0f237561b0ecff13d04e9dd19 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.840989  352096 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt
	I0609 01:40:44.841051  352096 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key
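
The apiserver serving certificate generated at 01:40:44.642488 bakes in the SANs listed there: the node IP 192.168.58.2, the service VIP 10.96.0.1, 127.0.0.1 and 10.0.0.1. What actually landed in the signed cert can be confirmed with openssl (profile path abbreviated relative to this run's .minikube):

    # List the Subject Alternative Names in the generated apiserver cert.
    openssl x509 -noout -text \
      -in ~/.minikube/profiles/calico-20210609012810-9941/apiserver.crt |
      grep -A1 'Subject Alternative Name'
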
	I0609 01:40:44.841102  352096 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key
	I0609 01:40:44.841112  352096 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt with IP's: []
	I0609 01:40:44.915955  352096 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt ...
	I0609 01:40:44.915989  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt: {Name:mkf48058b2fd1c7451a636bd94c7654745c05033 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.916188  352096 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key ...
	I0609 01:40:44.916206  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key: {Name:mke09647dda418d05401ddeb31cf7b4c662417a7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:44.916415  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
	W0609 01:40:44.916467  352096 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
	I0609 01:40:44.916486  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
	I0609 01:40:44.916523  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
	I0609 01:40:44.916559  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
	I0609 01:40:44.916590  352096 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
	I0609 01:40:44.917800  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0609 01:40:44.937170  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0609 01:40:44.956373  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0609 01:40:44.974933  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/calico-20210609012810-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0609 01:40:44.991731  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0609 01:40:45.008489  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0609 01:40:45.031606  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0609 01:40:45.047895  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0609 01:40:45.064667  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
	I0609 01:40:45.080936  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0609 01:40:45.096059  352096 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0609 01:40:45.107015  352096 ssh_runner.go:149] Run: openssl version
	I0609 01:40:45.111407  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0609 01:40:45.119189  352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:40:45.121891  352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun  9 00:58 /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:40:45.121925  352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:40:45.126118  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0609 01:40:45.132551  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
	I0609 01:40:45.138926  352096 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
	I0609 01:40:45.141619  352096 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun  9 01:04 /usr/share/ca-certificates/9941.pem
	I0609 01:40:45.141657  352096 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
	I0609 01:40:45.145814  352096 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
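
The `openssl x509 -hash` calls above compute the subject hash that OpenSSL uses to locate CAs in /etc/ssl/certs, and each `ln -fs` then creates the <hash>.0 symlink that makes the certificate discoverable — in effect a two-certificate c_rehash. The same registration for an arbitrary CA file:

    # Register a CA the way the log does (CERT path is hypothetical).
    CERT=/usr/share/ca-certificates/myCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")
    sudo ln -fs "$CERT" "/etc/ssl/certs/$HASH.0"
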
	I0609 01:40:45.152149  352096 kubeadm.go:390] StartCluster: {Name:calico-20210609012810-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:calico-20210609012810-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:calico NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 01:40:45.152257  352096 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0609 01:40:45.187288  352096 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0609 01:40:45.193888  352096 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0609 01:40:45.201487  352096 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0609 01:40:45.201538  352096 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0609 01:40:45.207661  352096 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0609 01:40:45.207713  352096 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0609 01:40:43.186787  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:46.229769  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:45.365532  300573 pod_ready.go:102] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:45.492939  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:45.992622  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:46.493059  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:46.992661  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:48.750771  344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.758074457s)
	I0609 01:40:48.993021  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:49.269941  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:52.311061  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:51.493556  344705 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (2.500498227s)
	I0609 01:40:51.992230  344705 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:52.180627  344705 kubeadm.go:985] duration metric: took 19.939502771s to wait for elevateKubeSystemPrivileges.
	I0609 01:40:52.180659  344705 kubeadm.go:392] StartCluster complete in 33.745162361s
	I0609 01:40:52.180680  344705 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:52.180766  344705 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	I0609 01:40:52.182512  344705 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:40:52.757936  344705 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "cilium-20210609012810-9941" rescaled to 1
	I0609 01:40:52.758013  344705 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
	I0609 01:40:52.759853  344705 out.go:170] * Verifying Kubernetes components...
	I0609 01:40:52.758135  344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0609 01:40:52.759935  344705 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:40:52.758167  344705 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0609 01:40:52.760010  344705 addons.go:59] Setting storage-provisioner=true in profile "cilium-20210609012810-9941"
	I0609 01:40:52.758404  344705 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 01:40:52.760030  344705 addons.go:59] Setting default-storageclass=true in profile "cilium-20210609012810-9941"
	I0609 01:40:52.760049  344705 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cilium-20210609012810-9941"
	I0609 01:40:52.760062  344705 addons.go:135] Setting addon storage-provisioner=true in "cilium-20210609012810-9941"
	W0609 01:40:52.760082  344705 addons.go:147] addon storage-provisioner should already be in state true
	I0609 01:40:52.760090  344705 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
	I0609 01:40:52.760113  344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
	I0609 01:40:52.760111  344705 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.718093ms
	I0609 01:40:52.760126  344705 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
	I0609 01:40:52.760140  344705 cache.go:88] Successfully saved all images to host disk.
	I0609 01:40:52.760541  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:52.760709  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:52.761714  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:50.469695  300573 pod_ready.go:92] pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace has status "Ready":"True"
	I0609 01:40:50.469731  300573 pod_ready.go:81] duration metric: took 16.612054385s waiting for pod "coredns-fb8b8dccf-ctgrx" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:50.469746  300573 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:51.488708  300573 pod_ready.go:92] pod "kube-proxy-97rr9" in "kube-system" namespace has status "Ready":"True"
	I0609 01:40:51.488734  300573 pod_ready.go:81] duration metric: took 1.018979544s waiting for pod "kube-proxy-97rr9" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:51.488744  300573 pod_ready.go:38] duration metric: took 17.633659357s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0609 01:40:51.488765  300573 api_server.go:50] waiting for apiserver process to appear ...
	I0609 01:40:51.488807  300573 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0609 01:40:51.520972  300573 api_server.go:70] duration metric: took 17.937884491s to wait for apiserver process to appear ...
	I0609 01:40:51.520999  300573 api_server.go:86] waiting for apiserver healthz status ...
	I0609 01:40:51.521011  300573 api_server.go:223] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0609 01:40:51.525448  300573 api_server.go:249] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0609 01:40:51.526192  300573 api_server.go:139] control plane version: v1.14.0
	I0609 01:40:51.526211  300573 api_server.go:129] duration metric: took 5.206469ms to wait for apiserver health ...
	I0609 01:40:51.526219  300573 system_pods.go:43] waiting for kube-system pods to appear ...
	I0609 01:40:51.528829  300573 system_pods.go:59] 4 kube-system pods found
	I0609 01:40:51.528851  300573 system_pods.go:61] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.528856  300573 system_pods.go:61] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.528865  300573 system_pods.go:61] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:51.528871  300573 system_pods.go:61] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.528887  300573 system_pods.go:74] duration metric: took 2.66306ms to wait for pod list to return data ...
	I0609 01:40:51.528896  300573 default_sa.go:34] waiting for default service account to be created ...
	I0609 01:40:51.531122  300573 default_sa.go:45] found service account: "default"
	I0609 01:40:51.531139  300573 default_sa.go:55] duration metric: took 2.23539ms for default service account to be created ...
	I0609 01:40:51.531146  300573 system_pods.go:116] waiting for k8s-apps to be running ...
	I0609 01:40:51.536460  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:51.536487  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.536494  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.536504  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:51.536517  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.536541  300573 retry.go:31] will retry after 214.282984ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:51.755301  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:51.755331  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.755339  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.755348  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:51.755355  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:51.755369  300573 retry.go:31] will retry after 293.852686ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:52.053824  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:52.053857  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.053865  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.053880  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:52.053892  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.053908  300573 retry.go:31] will retry after 355.089487ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:52.413227  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:52.413262  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.413272  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.413282  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:52.413289  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.413304  300573 retry.go:31] will retry after 480.685997ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:52.898013  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:52.898051  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.898059  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.898071  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:52.898078  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:52.898093  300573 retry.go:31] will retry after 544.138839ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:53.446671  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:53.446706  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:53.446713  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:53.446722  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:53.446728  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:53.446742  300573 retry.go:31] will retry after 684.014074ms: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:52.840705  344705 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0609 01:40:52.840860  344705 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:40:52.840873  344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0609 01:40:52.840938  344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
	I0609 01:40:52.820388  344705 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:40:52.841301  344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
	I0609 01:40:52.823016  344705 addons.go:135] Setting addon default-storageclass=true in "cilium-20210609012810-9941"
	W0609 01:40:52.841379  344705 addons.go:147] addon default-storageclass should already be in state true
	I0609 01:40:52.841434  344705 host.go:66] Checking if "cilium-20210609012810-9941" exists ...
	I0609 01:40:52.841999  344705 cli_runner.go:115] Run: docker container inspect cilium-20210609012810-9941 --format={{.State.Status}}
	I0609 01:40:52.875619  344705 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0609 01:40:52.878520  344705 node_ready.go:35] waiting up to 5m0s for node "cilium-20210609012810-9941" to be "Ready" ...
	I0609 01:40:52.883106  344705 node_ready.go:49] node "cilium-20210609012810-9941" has status "Ready":"True"
	I0609 01:40:52.883125  344705 node_ready.go:38] duration metric: took 4.566542ms waiting for node "cilium-20210609012810-9941" to be "Ready" ...
	I0609 01:40:52.883135  344705 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0609 01:40:52.901282  344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
	I0609 01:40:52.905753  344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:52.913698  344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:52.924428  344705 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0609 01:40:52.924451  344705 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0609 01:40:52.924507  344705 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" cilium-20210609012810-9941
	I0609 01:40:52.985429  344705 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32980 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/cilium-20210609012810-9941/id_rsa Username:docker}
	I0609 01:40:53.093158  344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:40:53.182043  344705 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0609 01:40:53.354533  344705 start.go:725] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
	I0609 01:40:53.354610  344705 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:40:53.354626  344705 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
	I0609 01:40:53.354641  344705 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
	I0609 01:40:53.355651  344705 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:53.355676  344705 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
	I0609 01:40:53.588602  344705 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
	I0609 01:40:53.588639  344705 addons.go:344] enableAddons completed in 830.486904ms
	W0609 01:40:54.204447  344705 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0609 01:40:54.204502  344705 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:54.205330  344705 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
	W0609 01:40:54.817533  344705 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:40:54.940307  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:55.379843  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:54.134198  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:54.134226  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:54.134231  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:54.134238  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:54.134242  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:54.134254  300573 retry.go:31] will retry after 1.039068543s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:55.178626  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:55.178662  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:55.178669  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:55.178679  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:55.178691  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:55.178707  300573 retry.go:31] will retry after 1.02433744s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:56.206796  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:56.206822  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:56.206828  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:56.206835  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:56.206839  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:56.206851  300573 retry.go:31] will retry after 1.268973106s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:57.480720  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:57.480751  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:57.480759  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:57.480771  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:57.480778  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:57.480796  300573 retry.go:31] will retry after 1.733071555s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:40:55.410467  344705 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:40:55.410515  344705 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0609 01:40:55.410544  344705 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:55.410583  344705 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:55.410638  344705 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:40:55.448411  344705 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.448506  344705 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.451714  344705 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
	I0609 01:40:55.451745  344705 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
	I0609 01:40:55.471575  344705 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.471628  344705 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:40:55.762458  344705 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
	I0609 01:40:55.762495  344705 cache_images.go:113] Successfully loaded all cached images
	I0609 01:40:55.762502  344705 cache_images.go:82] LoadImages completed in 2.407848633s
	I0609 01:40:55.762517  344705 cache_images.go:252] succeeded pushing to: cilium-20210609012810-9941
	I0609 01:40:55.762522  344705 cache_images.go:253] failed pushing to: 
	I0609 01:40:57.446509  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:40:59.919287  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:00.317663  352096 out.go:197]   - Generating certificates and keys ...
	I0609 01:41:00.320816  352096 out.go:197]   - Booting up control plane ...
	I0609 01:41:00.323612  352096 out.go:197]   - Configuring RBAC rules ...
	I0609 01:41:00.325728  352096 cni.go:93] Creating CNI manager for "calico"
	I0609 01:41:00.327397  352096 out.go:170] * Configuring Calico (Container Networking Interface) ...
	I0609 01:41:00.327463  352096 cni.go:187] applying CNI manifest using /var/lib/minikube/binaries/v1.20.7/kubectl ...
	I0609 01:41:00.327482  352096 ssh_runner.go:316] scp memory --> /var/tmp/minikube/cni.yaml (22544 bytes)
	I0609 01:41:00.355615  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0609 01:41:01.345873  352096 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0609 01:41:01.346015  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:01.346096  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=calico-20210609012810-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_01_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:40:58.423166  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:01.474794  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:40:59.218044  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:40:59.218071  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:59.218077  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:59.218084  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:40:59.218089  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:40:59.218101  300573 retry.go:31] will retry after 2.410580953s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:01.632429  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:01.632456  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:01.632462  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:01.632469  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:01.632476  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:01.632489  300573 retry.go:31] will retry after 3.437877504s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:02.460409  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:04.920306  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:01.767984  352096 ops.go:34] apiserver oom_adj: -16
	I0609 01:41:01.768084  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:02.480180  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:02.980220  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:03.480904  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:03.980208  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:04.480690  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:04.980710  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:05.480647  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:05.979985  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:06.480212  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:04.521744  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:05.073834  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:05.073863  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:05.073868  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:05.073876  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:05.073881  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:05.073895  300573 retry.go:31] will retry after 3.261655801s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:08.339005  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:08.339042  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:08.339049  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:08.339061  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:08.339067  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:08.339081  300573 retry.go:31] will retry after 4.086092664s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:07.419175  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:09.443670  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:06.980032  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:07.480282  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:07.980274  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:08.480263  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:08.980571  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:09.480813  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:09.980588  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:10.480840  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:10.980186  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:11.480965  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:07.580079  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:10.622741  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:13.117286  300573 system_pods.go:86] 4 kube-system pods found
	I0609 01:41:13.117320  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:13.117328  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:13.117340  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:13.117348  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:13.117364  300573 retry.go:31] will retry after 6.402197611s: missing components: etcd, kube-apiserver, kube-controller-manager, kube-scheduler
	I0609 01:41:13.726560  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:11.980058  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:13.480528  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:13.980786  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:15.479870  352096 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig: (1.499049149s)
	I0609 01:41:15.479969  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:16.480635  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:13.666259  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:16.715529  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:16.980322  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:17.480064  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:17.980779  352096 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:18.071429  352096 kubeadm.go:985] duration metric: took 16.725453565s to wait for elevateKubeSystemPrivileges.
	I0609 01:41:18.071462  352096 kubeadm.go:392] StartCluster complete in 32.919320287s
	I0609 01:41:18.071483  352096 settings.go:142] acquiring lock: {Name:mk8746ecf7d8ca6a3508d1e45e55db2314c0e73d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:18.071570  352096 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	I0609 01:41:18.073757  352096 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig: {Name:mk288d2c4fafd90028bf76db1824dfec28d92db3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:18.664569  352096 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "calico-20210609012810-9941" rescaled to 1
	I0609 01:41:18.664632  352096 start.go:214] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}
	I0609 01:41:18.664651  352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0609 01:41:18.664714  352096 addons.go:342] enableAddons start: toEnable=map[], additional=[]
	I0609 01:41:18.666538  352096 out.go:170] * Verifying Kubernetes components...
	I0609 01:41:18.664779  352096 addons.go:59] Setting storage-provisioner=true in profile "calico-20210609012810-9941"
	I0609 01:41:18.666596  352096 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:41:18.666612  352096 addons.go:135] Setting addon storage-provisioner=true in "calico-20210609012810-9941"
	W0609 01:41:18.666630  352096 addons.go:147] addon storage-provisioner should already be in state true
	I0609 01:41:18.664791  352096 addons.go:59] Setting default-storageclass=true in profile "calico-20210609012810-9941"
	I0609 01:41:18.666671  352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
	I0609 01:41:18.666676  352096 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "calico-20210609012810-9941"
	I0609 01:41:18.664965  352096 cache.go:108] acquiring lock: {Name:mk2dd9808d496cd84c38482eea9e354a60be2885 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0609 01:41:18.666833  352096 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 exists
	I0609 01:41:18.666855  352096 cache.go:97] cache image "minikube-local-cache-test:functional-20210609010438-9941" -> "/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941" took 1.89821ms
	I0609 01:41:18.666869  352096 cache.go:81] save to tar file minikube-local-cache-test:functional-20210609010438-9941 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 succeeded
	I0609 01:41:18.666879  352096 cache.go:88] Successfully saved all images to host disk.
	I0609 01:41:18.667046  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.667251  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.667265  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.711328  352096 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:41:18.711376  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:41:16.464152  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:18.919739  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:18.722674  352096 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0609 01:41:18.722788  352096 addons.go:275] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:41:18.722802  352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0609 01:41:18.722851  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:41:18.758518  352096 addons.go:135] Setting addon default-storageclass=true in "calico-20210609012810-9941"
	W0609 01:41:18.758544  352096 addons.go:147] addon default-storageclass should already be in state true
	I0609 01:41:18.758573  352096 host.go:66] Checking if "calico-20210609012810-9941" exists ...
	I0609 01:41:18.759066  352096 cli_runner.go:115] Run: docker container inspect calico-20210609012810-9941 --format={{.State.Status}}
	I0609 01:41:18.770750  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:41:18.794220  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:41:18.806700  352096 addons.go:275] installing /etc/kubernetes/addons/storageclass.yaml
	I0609 01:41:18.806724  352096 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0609 01:41:18.806770  352096 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" calico-20210609012810-9941
	I0609 01:41:18.861723  352096 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32985 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/calico-20210609012810-9941/id_rsa Username:docker}
	I0609 01:41:19.254824  352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0609 01:41:19.257472  352096 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0609 01:41:19.269050  352096 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0609 01:41:19.269206  352096 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:41:19.269224  352096 docker.go:541] minikube-local-cache-test:functional-20210609010438-9941 wasn't preloaded
	I0609 01:41:19.269233  352096 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210609010438-9941]
	I0609 01:41:19.270563  352096 node_ready.go:35] waiting up to 5m0s for node "calico-20210609012810-9941" to be "Ready" ...
	I0609 01:41:19.270617  352096 image.go:133] retrieving image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:19.270639  352096 image.go:139] checking repository: index.docker.io/library/minikube-local-cache-test
	I0609 01:41:19.344594  352096 node_ready.go:49] node "calico-20210609012810-9941" has status "Ready":"True"
	I0609 01:41:19.344625  352096 node_ready.go:38] duration metric: took 74.017948ms waiting for node "calico-20210609012810-9941" to be "Ready" ...
	I0609 01:41:19.344637  352096 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0609 01:41:19.359631  352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
	W0609 01:41:20.095801  352096 image.go:146] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0609 01:41:20.095863  352096 image.go:147] short name: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:20.096813  352096 image.go:175] daemon lookup for minikube-local-cache-test:functional-20210609010438-9941: Error response from daemon: reference does not exist
	I0609 01:41:20.438848  352096 ssh_runner.go:189] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.7/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.18134229s)
	I0609 01:41:20.438935  352096 ssh_runner.go:189] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.58.1 host.minikube.internal\n           fallthrough\n        }' | sudo /var/lib/minikube/binaries/v1.20.7/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.169850353s)
	I0609 01:41:20.438963  352096 start.go:725] {"host.minikube.internal": 192.168.58.1} host record injected into CoreDNS
	I0609 01:41:20.441405  352096 out.go:170] * Enabled addons: default-storageclass, storage-provisioner
	I0609 01:41:20.441438  352096 addons.go:344] enableAddons completed in 1.776732349s
	W0609 01:41:20.710811  352096 image.go:185] authn lookup for minikube-local-cache-test:functional-20210609010438-9941 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:41:21.301766  352096 image.go:189] remote lookup for minikube-local-cache-test:functional-20210609010438-9941: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0609 01:41:21.301819  352096 image.go:92] error retrieve Image minikube-local-cache-test:functional-20210609010438-9941 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210609010438-9941: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0609 01:41:21.301851  352096 cache_images.go:106] "minikube-local-cache-test:functional-20210609010438-9941" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:21.301896  352096 docker.go:236] Removing image: minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:21.301940  352096 ssh_runner.go:149] Run: docker rmi minikube-local-cache-test:functional-20210609010438-9941
	I0609 01:41:21.448602  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:21.464097  352096 cache_images.go:279] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:21.464209  352096 ssh_runner.go:149] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:21.467662  352096 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941': No such file or directory
	I0609 01:41:21.467695  352096 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941 (5120 bytes)
	I0609 01:41:21.553071  352096 docker.go:203] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:21.553158  352096 ssh_runner.go:149] Run: docker load -i /var/lib/minikube/images/minikube-local-cache-test_functional-20210609010438-9941
	I0609 01:41:19.755463  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:19.524872  300573 system_pods.go:86] 7 kube-system pods found
	I0609 01:41:19.524911  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524921  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:19.524931  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:19.524938  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524948  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524961  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:19.524978  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:19.524996  300573 retry.go:31] will retry after 6.062999549s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0609 01:41:21.419636  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:23.919505  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:21.913966  352096 cache_images.go:308] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/images/minikube-local-cache-test_functional-20210609010438-9941 from cache
	I0609 01:41:21.914009  352096 cache_images.go:113] Successfully loaded all cached images
	I0609 01:41:21.914025  352096 cache_images.go:82] LoadImages completed in 2.644783095s
	I0609 01:41:21.914043  352096 cache_images.go:252] succeeded pushing to: calico-20210609012810-9941
	I0609 01:41:21.914049  352096 cache_images.go:253] failed pushing to: 
	I0609 01:41:23.875804  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:25.876212  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:22.798808  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:25.839455  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:25.592272  300573 system_pods.go:86] 7 kube-system pods found
	I0609 01:41:25.592298  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592304  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:25.592308  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:25.592311  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592317  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592325  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:25.592331  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:25.592342  300573 retry.go:31] will retry after 10.504197539s: missing components: etcd, kube-apiserver, kube-controller-manager
	I0609 01:41:25.919767  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:28.419788  344705 pod_ready.go:102] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:28.920252  344705 pod_ready.go:92] pod "cilium-2rdhk" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:28.920277  344705 pod_ready.go:81] duration metric: took 36.018972007s waiting for pod "cilium-2rdhk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.920288  344705 pod_ready.go:78] waiting up to 5m0s for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.924675  344705 pod_ready.go:92] pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:28.924691  344705 pod_ready.go:81] duration metric: took 4.397091ms waiting for pod "cilium-operator-7c755f4594-2x5fn" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.924702  344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.929071  344705 pod_ready.go:92] pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:28.929091  344705 pod_ready.go:81] duration metric: took 4.382306ms waiting for pod "coredns-74ff55c5b-42mdv" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.929102  344705 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:28.931060  344705 pod_ready.go:97] error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
	I0609 01:41:28.931084  344705 pod_ready.go:81] duration metric: took 1.975143ms waiting for pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace to be "Ready" ...
	E0609 01:41:28.931095  344705 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-jv4pl" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-jv4pl" not found
	I0609 01:41:28.931103  344705 pod_ready.go:78] waiting up to 5m0s for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:27.876306  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:30.376138  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:28.884648  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:31.933672  329232 stop.go:59] stop err: Maximum number of retries (60) exceeded
	I0609 01:41:31.933729  329232 delete.go:129] stophost failed (probably ok): Temporary Error: stop: Maximum number of retries (60) exceeded
	I0609 01:41:31.934195  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	W0609 01:41:31.985166  329232 delete.go:135] deletehost failed: Docker machine "auto-20210609012809-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0609 01:41:31.985255  329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
	I0609 01:41:32.031852  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:32.081551  329232 cli_runner.go:115] Run: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0"
	W0609 01:41:32.125884  329232 cli_runner.go:162] docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0" returned with exit code 1
	I0609 01:41:32.125930  329232 oci.go:632] error shutdown auto-20210609012809-9941: docker exec --privileged -t auto-20210609012809-9941 /bin/bash -c "sudo init 0": exit status 1
	stdout:
	
	stderr:
	Error response from daemon: Container bc54bc9bf415ee2bb0df1bcad0aed4e971bd39991c0782ffae750733117660bd is not running
	I0609 01:41:33.127009  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:33.188615  329232 oci.go:646] temporary error: container auto-20210609012809-9941 status is  but expect it to be exited
	I0609 01:41:33.188641  329232 oci.go:652] Successfully shutdown container auto-20210609012809-9941
	I0609 01:41:33.188680  329232 cli_runner.go:115] Run: docker rm -f -v auto-20210609012809-9941
	I0609 01:41:33.232875  329232 cli_runner.go:115] Run: docker container inspect -f {{.Id}} auto-20210609012809-9941
	W0609 01:41:33.278916  329232 cli_runner.go:162] docker container inspect -f {{.Id}} auto-20210609012809-9941 returned with exit code 1
	I0609 01:41:33.279004  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0609 01:41:33.317124  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0609 01:41:33.317184  329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
	I0609 01:41:33.317205  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
	W0609 01:41:33.354864  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
	I0609 01:41:33.354894  329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210609012809-9941
	I0609 01:41:33.354910  329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210609012809-9941
	
	** /stderr **
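
The failed-inspect / bare-inspect pair above is the debug fallback the log itself names ("to gather additional debugging logs"): when the templated call exits non-zero, the command is re-run without --format and its stdout/stderr recorded, which pins the failure to "No such network" rather than a broken template. A minimal sketch of the same pattern ($NET is a placeholder):

	if ! docker network inspect "$NET" --format '{{.Name}}'; then
	  # re-run without the template so the captured stderr distinguishes
	  # a missing network ("No such network") from a template error
	  docker network inspect "$NET"
	fi
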
	W0609 01:41:33.355033  329232 delete.go:139] delete failed (probably ok) <nil>
	I0609 01:41:33.355043  329232 fix.go:120] Sleeping 1 second for extra luck!
	I0609 01:41:34.355909  329232 start.go:126] createHost starting for "" (driver="docker")
	I0609 01:41:30.941410  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:32.942019  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:34.942818  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:32.377229  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:34.876436  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:34.358151  329232 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0609 01:41:34.358255  329232 start.go:160] libmachine.API.Create for "auto-20210609012809-9941" (driver="docker")
	I0609 01:41:34.358292  329232 client.go:168] LocalClient.Create starting
	I0609 01:41:34.358357  329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem
	I0609 01:41:34.358386  329232 main.go:128] libmachine: Decoding PEM data...
	I0609 01:41:34.358404  329232 main.go:128] libmachine: Parsing certificate...
	I0609 01:41:34.358508  329232 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem
	I0609 01:41:34.358532  329232 main.go:128] libmachine: Decoding PEM data...
	I0609 01:41:34.358541  329232 main.go:128] libmachine: Parsing certificate...
	I0609 01:41:34.358756  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0609 01:41:34.402255  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0609 01:41:34.402349  329232 network_create.go:255] running [docker network inspect auto-20210609012809-9941] to gather additional debugging logs...
	I0609 01:41:34.402373  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941
	W0609 01:41:34.447755  329232 cli_runner.go:162] docker network inspect auto-20210609012809-9941 returned with exit code 1
	I0609 01:41:34.447782  329232 network_create.go:258] error running [docker network inspect auto-20210609012809-9941]: docker network inspect auto-20210609012809-9941: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: auto-20210609012809-9941
	I0609 01:41:34.447793  329232 network_create.go:260] output of [docker network inspect auto-20210609012809-9941]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: auto-20210609012809-9941
	
	** /stderr **
	I0609 01:41:34.447829  329232 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:41:34.487524  329232 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-3efa0710be1e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:6f:c7:95:89}}
	I0609 01:41:34.488287  329232 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-494a1c72530c IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:2d:51:70:a3}}
	I0609 01:41:34.489047  329232 network.go:215] skipping subnet 192.168.67.0/24 that is taken: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName:br-3b40e12707af IfaceIPv4:192.168.67.1 IfaceMTU:1500 IfaceMAC:02:42:ac:37:f7:3a}}
	I0609 01:41:34.489905  329232 network.go:263] reserving subnet 192.168.76.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc000136218 192.168.76.0:0xc000408548] misses:0}
	I0609 01:41:34.489944  329232 network.go:210] using free private subnet 192.168.76.0/24: &{IP:192.168.76.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.76.0/24 Gateway:192.168.76.1 ClientMin:192.168.76.2 ClientMax:192.168.76.254 Broadcast:192.168.76.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0609 01:41:34.489977  329232 network_create.go:106] attempt to create docker network auto-20210609012809-9941 192.168.76.0/24 with gateway 192.168.76.1 and MTU of 1500 ...
	I0609 01:41:34.490049  329232 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.76.0/24 --gateway=192.168.76.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true auto-20210609012809-9941
	I0609 01:41:34.563866  329232 network_create.go:90] docker network auto-20210609012809-9941 192.168.76.0/24 created
	I0609 01:41:34.563896  329232 kic.go:106] calculated static IP "192.168.76.2" for the "auto-20210609012809-9941" container
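
The three "skipping subnet" lines show the subnet picker walking candidate /24 ranges (192.168.49.0, .58.0, .67.0, then .76.0 — 9-wide steps) until one has no matching host interface, then creating a bridge network pinned to that range with a fixed gateway. To verify the result by hand (profile name and subnet taken from this run):

	docker network inspect auto-20210609012809-9941 \
	  --format 'subnet={{(index .IPAM.Config 0).Subnet}} gateway={{(index .IPAM.Config 0).Gateway}}'
	# expected here: subnet=192.168.76.0/24 gateway=192.168.76.1
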
	I0609 01:41:34.563950  329232 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0609 01:41:34.605010  329232 cli_runner.go:115] Run: docker volume create auto-20210609012809-9941 --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true
	I0609 01:41:34.642891  329232 oci.go:102] Successfully created a docker volume auto-20210609012809-9941
	I0609 01:41:34.642974  329232 cli_runner.go:115] Run: docker run --rm --name auto-20210609012809-9941-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --entrypoint /usr/bin/test -v auto-20210609012809-9941:/var gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -d /var/lib
	I0609 01:41:35.363820  329232 oci.go:106] Successfully prepared a docker volume auto-20210609012809-9941
	W0609 01:41:35.363866  329232 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0609 01:41:35.363875  329232 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0609 01:41:35.363883  329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:41:35.363916  329232 kic.go:179] Starting extracting preloaded images to volume ...
	I0609 01:41:35.363930  329232 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0609 01:41:35.363995  329232 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir
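
The volume dance above is the preload mechanism: a throwaway "preload-sidecar" container first confirms the named volume mounts at /var, then a one-shot container bind-mounts the preload tarball read-only and unpacks it into the volume with lz4, so the node container boots with images already under /var/lib/docker. Stripped of labels, the extraction step has this shape ($PRELOAD_TARBALL and $VOLUME stand in for the long paths in the command above; kicbase digest omitted):

	docker run --rm --entrypoint /usr/bin/tar \
	  -v "$PRELOAD_TARBALL":/preloaded.tar:ro \
	  -v "$VOLUME":/extractDir \
	  gcr.io/k8s-minikube/kicbase:v0.0.23 \
	  -I lz4 -xf /preloaded.tar -C /extractDir
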
	I0609 01:41:35.467993  329232 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname auto-20210609012809-9941 --name auto-20210609012809-9941 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=auto-20210609012809-9941 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=auto-20210609012809-9941 --network auto-20210609012809-9941 --ip 192.168.76.2 --volume auto-20210609012809-9941:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45
	I0609 01:41:35.995981  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Running}}
	I0609 01:41:36.052103  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:36.105861  329232 cli_runner.go:115] Run: docker exec auto-20210609012809-9941 stat /var/lib/dpkg/alternatives/iptables
	I0609 01:41:36.272972  329232 oci.go:278] the created container "auto-20210609012809-9941" has a running status.
	I0609 01:41:36.273013  329232 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa...
	I0609 01:41:36.425757  329232 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0609 01:41:36.825610  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:36.868189  329232 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0609 01:41:36.868214  329232 kic_runner.go:115] Args: [docker exec --privileged auto-20210609012809-9941 chown docker:docker /home/docker/.ssh/authorized_keys]
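
SSH access to the node is bootstrapped exactly as the kic_runner lines show: generate a keypair on the host, install the public half as /home/docker/.ssh/authorized_keys inside the container, fix ownership, then connect through the published 22/tcp port. A hand-rolled equivalent (minikube generates the key in Go rather than with ssh-keygen; 32990 is the host port this run got for 22/tcp, per the lines below):

	ssh-keygen -t rsa -N '' -f ./id_rsa
	docker exec auto-20210609012809-9941 mkdir -p /home/docker/.ssh
	docker cp ./id_rsa.pub auto-20210609012809-9941:/home/docker/.ssh/authorized_keys
	docker exec --privileged auto-20210609012809-9941 \
	  chown -R docker:docker /home/docker/.ssh
	ssh -i ./id_rsa -p 32990 docker@127.0.0.1
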
	I0609 01:41:36.102263  300573 system_pods.go:86] 8 kube-system pods found
	I0609 01:41:36.102300  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102308  300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Pending
	I0609 01:41:36.102315  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102323  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102329  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102336  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102347  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:36.102364  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:36.102381  300573 retry.go:31] will retry after 12.194240946s: missing components: etcd
	I0609 01:41:37.093269  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.442809  344705 pod_ready.go:102] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.940516  344705 pod_ready.go:92] pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:39.940545  344705 pod_ready.go:81] duration metric: took 11.009433469s waiting for pod "etcd-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:39.940560  344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:39.944617  344705 pod_ready.go:92] pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:39.944633  344705 pod_ready.go:81] duration metric: took 4.066455ms waiting for pod "kube-apiserver-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:39.944642  344705 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:37.080706  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.379466  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:41.383974  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:39.584397  329232 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v auto-20210609012809-9941:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 -I lz4 -xf /preloaded.tar -C /extractDir: (4.220346647s)
	I0609 01:41:39.584427  329232 kic.go:188] duration metric: took 4.220510 seconds to extract preloaded images to volume
	I0609 01:41:39.584497  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	I0609 01:41:39.635769  329232 machine.go:88] provisioning docker machine ...
	I0609 01:41:39.635827  329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
	I0609 01:41:39.635904  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:39.684460  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:39.684645  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:39.684660  329232 main.go:128] libmachine: About to run SSH command:
	sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
	I0609 01:41:39.841506  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
	
	I0609 01:41:39.841577  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:39.885725  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:39.885870  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:39.885889  329232 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 01:41:40.009081  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0609 01:41:40.009113  329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
	I0609 01:41:40.009136  329232 ubuntu.go:177] setting up certificates
	I0609 01:41:40.009147  329232 provision.go:83] configureAuth start
	I0609 01:41:40.009201  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:40.054568  329232 provision.go:137] copyHostCerts
	I0609 01:41:40.054639  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
	I0609 01:41:40.054650  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
	I0609 01:41:40.054702  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
	I0609 01:41:40.054772  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
	I0609 01:41:40.054816  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
	I0609 01:41:40.054836  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
	I0609 01:41:40.054888  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
	I0609 01:41:40.054896  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
	I0609 01:41:40.054916  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
	I0609 01:41:40.054956  329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
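
configureAuth here issues a CA-signed server certificate whose SANs cover the container IP, loopback, and the hostnames listed in san=[...] above. minikube does this with Go's crypto libraries; an openssl reproduction of an equivalent certificate would look roughly like this (file names follow the .minikube layout above):

	openssl req -new -newkey rsa:2048 -nodes \
	  -keyout server-key.pem -out server.csr \
	  -subj '/O=jenkins.auto-20210609012809-9941'
	openssl x509 -req -in server.csr \
	  -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	  -out server.pem -days 365 \
	  -extfile <(printf 'subjectAltName=IP:192.168.76.2,IP:127.0.0.1,DNS:localhost,DNS:minikube,DNS:auto-20210609012809-9941')
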
	I0609 01:41:40.199140  329232 provision.go:171] copyRemoteCerts
	I0609 01:41:40.199207  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 01:41:40.199267  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.240189  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:40.339747  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0609 01:41:40.358551  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0609 01:41:40.377700  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0609 01:41:40.396157  329232 provision.go:86] duration metric: configureAuth took 386.999034ms
	I0609 01:41:40.396180  329232 ubuntu.go:193] setting minikube options for container-runtime
	I0609 01:41:40.396396  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.437678  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:40.437928  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:40.437947  329232 main.go:128] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 01:41:40.565938  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 01:41:40.565966  329232 ubuntu.go:71] root file system type: overlay
	I0609 01:41:40.566224  329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 01:41:40.566318  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.609110  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:40.609254  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:40.609318  329232 main.go:128] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 01:41:40.742784  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 01:41:40.742865  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:40.799645  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:40.799898  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:40.799934  329232 main.go:128] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 01:41:41.471089  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: --- /lib/systemd/system/docker.service	2021-06-02 11:54:50.000000000 +0000
	+++ /lib/systemd/system/docker.service.new	2021-06-09 01:41:40.733754700 +0000
	@@ -1,30 +1,32 @@
	 [Unit]
	 Description=Docker Application Container Engine
	 Documentation=https://docs.docker.com
	+BindsTo=containerd.service
	 After=network-online.target firewalld.service containerd.service
	 Wants=network-online.target
	-Requires=docker.socket containerd.service
	+Requires=docker.socket
	+StartLimitBurst=3
	+StartLimitIntervalSec=60
	 
	 [Service]
	 Type=notify
	-# the default is not to use systemd for cgroups because the delegate issues still
	-# exists and systemd currently does not support the cgroup feature set required
	-# for containers run by docker
	-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
	-ExecReload=/bin/kill -s HUP $MAINPID
	-TimeoutSec=0
	-RestartSec=2
	-Restart=always
	-
	-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
	-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
	-# to make them work for either version of systemd.
	-StartLimitBurst=3
	+Restart=on-failure
	 
	-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
	-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
	-# this option work for either version of systemd.
	-StartLimitInterval=60s
	+
	+
	+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	+# The base configuration already specifies an 'ExecStart=...' command. The first directive
	+# here is to clear out that command inherited from the base configuration. Without this,
	+# the command from the base configuration and the command specified here are treated as
	+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	+# will catch this invalid input and refuse to start the service with an error like:
	+#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	+
	+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	+ExecStart=
	+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	+ExecReload=/bin/kill -s HUP $MAINPID
	 
	 # Having non-zero Limit*s causes performance problems due to accounting overhead
	 # in the kernel. We recommend using cgroups to do container-local accounting.
	@@ -32,16 +34,16 @@
	 LimitNPROC=infinity
	 LimitCORE=infinity
	 
	-# Comment TasksMax if your systemd version does not support it.
	-# Only systemd 226 and above support this option.
	+# Uncomment TasksMax if your systemd version supports it.
	+# Only systemd 226 and above support this version.
	 TasksMax=infinity
	+TimeoutStartSec=0
	 
	 # set delegate yes so that systemd does not reset the cgroups of docker containers
	 Delegate=yes
	 
	 # kill only the docker process, not all processes in the cgroup
	 KillMode=process
	-OOMScoreAdjust=-500
	 
	 [Install]
	 WantedBy=multi-user.target
	Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
	Executing: /lib/systemd/systemd-sysv-install enable docker
	
	I0609 01:41:41.471128  329232 machine.go:91] provisioned docker machine in 1.835332676s
	I0609 01:41:41.471143  329232 client.go:171] LocalClient.Create took 7.112842351s
	I0609 01:41:41.471164  329232 start.go:168] duration metric: libmachine.API.Create for "auto-20210609012809-9941" took 7.112906767s
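
One detail worth calling out in the unit update above: the rendered file is written to docker.service.new, and `diff -u` gates the swap (diff exits non-zero only when the files differ), so the mv plus daemon-reload/enable/restart chain runs only when the unit actually changed. It did here, hence the full diff and the SysV synchronization messages; the second provisioning pass further down exits quietly because the files already match. The literal %!s(MISSING) in the logged printf command is almost certainly a fmt artifact of how the log line itself was rendered, not what ran over SSH — the tee output that follows shows the full unit text was sent. The gating pattern in generic form (foo.conf/foo are placeholders):

	sudo diff -u /etc/foo.conf /etc/foo.conf.new || {
	  sudo mv /etc/foo.conf.new /etc/foo.conf
	  sudo systemctl daemon-reload && sudo systemctl restart foo
	}
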
	I0609 01:41:41.471179  329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
	I0609 01:41:41.471186  329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 01:41:41.471252  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 01:41:41.471302  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:41.519729  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:41.609111  329232 ssh_runner.go:149] Run: cat /etc/os-release
	I0609 01:41:41.611701  329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 01:41:41.611732  329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 01:41:41.611740  329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 01:41:41.611745  329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0609 01:41:41.611753  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
	I0609 01:41:41.611793  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
	I0609 01:41:41.611879  329232 start.go:270] post-start completed in 140.693775ms
	I0609 01:41:41.612136  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:41.660654  329232 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/config.json ...
	I0609 01:41:41.660931  329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:41:41.660996  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:41.708265  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:41.793790  329232 start.go:129] duration metric: createHost completed in 7.437849081s
	I0609 01:41:41.793878  329232 cli_runner.go:115] Run: docker container inspect auto-20210609012809-9941 --format={{.State.Status}}
	W0609 01:41:41.834734  329232 fix.go:134] unexpected machine state, will restart: <nil>
	I0609 01:41:41.834764  329232 machine.go:88] provisioning docker machine ...
	I0609 01:41:41.834786  329232 ubuntu.go:169] provisioning hostname "auto-20210609012809-9941"
	I0609 01:41:41.834833  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:41.879476  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:41.879641  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:41.879661  329232 main.go:128] libmachine: About to run SSH command:
	sudo hostname auto-20210609012809-9941 && echo "auto-20210609012809-9941" | sudo tee /etc/hostname
	I0609 01:41:42.011151  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: auto-20210609012809-9941
	
	I0609 01:41:42.011225  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.061407  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:42.061641  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:42.061675  329232 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-20210609012809-9941' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-20210609012809-9941/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-20210609012809-9941' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0609 01:41:42.184948  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0609 01:41:42.184977  329232 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube}
	I0609 01:41:42.185001  329232 ubuntu.go:177] setting up certificates
	I0609 01:41:42.185011  329232 provision.go:83] configureAuth start
	I0609 01:41:42.185062  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:42.223424  329232 provision.go:137] copyHostCerts
	I0609 01:41:42.223473  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem, removing ...
	I0609 01:41:42.223480  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem
	I0609 01:41:42.223524  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.pem (1082 bytes)
	I0609 01:41:42.223592  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem, removing ...
	I0609 01:41:42.223605  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem
	I0609 01:41:42.223629  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cert.pem (1123 bytes)
	I0609 01:41:42.223679  329232 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem, removing ...
	I0609 01:41:42.223689  329232 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem
	I0609 01:41:42.223706  329232 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/key.pem (1679 bytes)
	I0609 01:41:42.223802  329232 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem org=jenkins.auto-20210609012809-9941 san=[192.168.76.2 127.0.0.1 localhost 127.0.0.1 minikube auto-20210609012809-9941]
	I0609 01:41:42.486214  329232 provision.go:171] copyRemoteCerts
	I0609 01:41:42.486276  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0609 01:41:42.486327  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.526157  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:42.612850  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0609 01:41:42.630046  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0609 01:41:42.647341  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/server.pem --> /etc/docker/server.pem (1245 bytes)
	I0609 01:41:42.663823  329232 provision.go:86] duration metric: configureAuth took 478.797993ms
	I0609 01:41:42.663855  329232 ubuntu.go:193] setting minikube options for container-runtime
	I0609 01:41:42.664049  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.708962  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:42.709147  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:42.709164  329232 main.go:128] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0609 01:41:42.837104  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: overlay
	
	I0609 01:41:42.837131  329232 ubuntu.go:71] root file system type: overlay
	I0609 01:41:42.837293  329232 provision.go:308] Updating docker unit: /lib/systemd/system/docker.service ...
	I0609 01:41:42.837345  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:42.884564  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:42.884726  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:42.884819  329232 main.go:128] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0609 01:41:43.017785  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	BindsTo=containerd.service
	After=network-online.target firewalld.service containerd.service
	Wants=network-online.target
	Requires=docker.socket
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0609 01:41:43.017862  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.058769  329232 main.go:128] libmachine: Using SSH client type: native
	I0609 01:41:43.058909  329232 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802f80] 0x802f40 <nil>  [] 0s} 127.0.0.1 32990 <nil> <nil>}
	I0609 01:41:43.058927  329232 main.go:128] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0609 01:41:43.180717  329232 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0609 01:41:43.180750  329232 machine.go:91] provisioned docker machine in 1.345979023s
	I0609 01:41:43.180763  329232 start.go:267] post-start starting for "auto-20210609012809-9941" (driver="docker")
	I0609 01:41:43.180773  329232 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0609 01:41:43.180829  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0609 01:41:43.180871  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.220933  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.308831  329232 ssh_runner.go:149] Run: cat /etc/os-release
	I0609 01:41:43.311629  329232 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0609 01:41:43.311653  329232 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0609 01:41:43.311664  329232 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0609 01:41:43.311671  329232 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0609 01:41:43.311681  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/addons for local assets ...
	I0609 01:41:43.311732  329232 filesync.go:126] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files for local assets ...
	I0609 01:41:43.311850  329232 start.go:270] post-start completed in 131.0789ms
	I0609 01:41:43.311895  329232 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:41:43.311938  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.351864  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.439589  329232 fix.go:57] fixHost completed within 3m18.46145985s
	I0609 01:41:43.439614  329232 start.go:80] releasing machines lock for "auto-20210609012809-9941", held for 3m18.461506998s
	I0609 01:41:43.439689  329232 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" auto-20210609012809-9941
	I0609 01:41:43.480908  329232 ssh_runner.go:149] Run: sudo service containerd status
	I0609 01:41:43.480953  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.480998  329232 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0609 01:41:43.481050  329232 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" auto-20210609012809-9941
	I0609 01:41:43.523337  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.523672  329232 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32990 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/auto-20210609012809-9941/id_rsa Username:docker}
	I0609 01:41:43.625901  329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:41:43.634199  329232 cruntime.go:225] skipping containerd shutdown because we are bound to it
	I0609 01:41:43.634259  329232 ssh_runner.go:149] Run: sudo service crio status
	I0609 01:41:43.651967  329232 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
	image-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0609 01:41:43.663538  329232 ssh_runner.go:149] Run: sudo systemctl cat docker.service
	I0609 01:41:43.671774  329232 ssh_runner.go:149] Run: sudo service docker status
	I0609 01:41:43.685805  329232 ssh_runner.go:149] Run: docker version --format {{.Server.Version}}
	I0609 01:41:41.955318  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:44.454390  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:43.733795  329232 out.go:197] * Preparing Kubernetes v1.20.7 on Docker 20.10.7 ...
	I0609 01:41:43.733887  329232 cli_runner.go:115] Run: docker network inspect auto-20210609012809-9941 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0609 01:41:43.781233  329232 ssh_runner.go:149] Run: grep 192.168.76.1	host.minikube.internal$ /etc/hosts
	I0609 01:41:43.784669  329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.76.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
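	The command above is minikube's idempotent /etc/hosts update: drop any stale host.minikube.internal line, append the current mapping to an unprivileged temp file, then install it with sudo cp so only the final copy needs root. The same pattern as a reusable sketch (the IP and hostname arguments are placeholders):
	
	  update_hosts_entry() {  # usage: update_hosts_entry <ip> <name>
	    local ip=$1 name=$2
	    { grep -v $'\t'"$name"'$' /etc/hosts; printf '%s\t%s\n' "$ip" "$name"; } > /tmp/h.$$
	    sudo cp /tmp/h.$$ /etc/hosts
	  }
	  update_hosts_entry 192.168.76.1 host.minikube.internal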
	I0609 01:41:43.794580  329232 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.crt
	I0609 01:41:43.794703  329232 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
	I0609 01:41:43.794837  329232 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 01:41:43.794899  329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:41:43.836439  329232 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0609 01:41:43.836465  329232 docker.go:466] Images already preloaded, skipping extraction
	I0609 01:41:43.836518  329232 ssh_runner.go:149] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0609 01:41:43.874900  329232 docker.go:535] Got preloaded images: -- stdout --
	k8s.gcr.io/kube-proxy:v1.20.7
	k8s.gcr.io/kube-controller-manager:v1.20.7
	k8s.gcr.io/kube-apiserver:v1.20.7
	k8s.gcr.io/kube-scheduler:v1.20.7
	gcr.io/k8s-minikube/storage-provisioner:v5
	kubernetesui/dashboard:v2.1.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	kubernetesui/metrics-scraper:v1.0.4
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
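	Both docker images passes return the identical v1.20.7 list, which is why extraction and image loading are skipped. The manual equivalent of that check (sketch):
	
	  # list what the runtime already has, in the format minikube compares
	  docker images --format '{{.Repository}}:{{.Tag}}' | sort
	  # a required image matching exactly means the preload can be skipped, e.g.:
	  docker images --format '{{.Repository}}:{{.Tag}}' | grep -x 'k8s.gcr.io/kube-apiserver:v1.20.7'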
	I0609 01:41:43.874929  329232 cache_images.go:74] Images are preloaded, skipping loading
	I0609 01:41:43.874987  329232 ssh_runner.go:149] Run: docker info --format {{.CgroupDriver}}
	I0609 01:41:43.959341  329232 cni.go:93] Creating CNI manager for ""
	I0609 01:41:43.959363  329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0609 01:41:43.959373  329232 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0609 01:41:43.959385  329232 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.76.2 APIServerPort:8443 KubernetesVersion:v1.20.7 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-20210609012809-9941 NodeName:auto-20210609012809-9941 DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.76.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.76.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0609 01:41:43.959528  329232 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.76.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/dockershim.sock
	  name: "auto-20210609012809-9941"
	  kubeletExtraArgs:
	    node-ip: 192.168.76.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.76.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.7
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
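	
	This generated config is what lands in /var/tmp/minikube/kubeadm.yaml below. To exercise it without mutating the node, kubeadm's dry-run mode works (sketch; run inside the node, e.g. via minikube ssh):
	
	  sudo /var/lib/minikube/binaries/v1.20.7/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml --dry-run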
	
	I0609 01:41:43.959623  329232 kubeadm.go:909] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.7/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=auto-20210609012809-9941 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.76.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0609 01:41:43.959678  329232 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.7
	I0609 01:41:43.966644  329232 binaries.go:44] Found k8s binaries, skipping transfer
	I0609 01:41:43.966767  329232 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /var/lib/minikube /etc/init.d
	I0609 01:41:43.973306  329232 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (350 bytes)
	I0609 01:41:43.985377  329232 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0609 01:41:43.996832  329232 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1883 bytes)
	I0609 01:41:44.008194  329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/openrc-restart-wrapper.sh (233 bytes)
	I0609 01:41:44.019580  329232 ssh_runner.go:316] scp memory --> /etc/init.d/kubelet (839 bytes)
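	The five files copied above wire kubelet up two ways: a systemd drop-in (10-kubeadm.conf) that overrides ExecStart, plus an openrc/init.d wrapper for images without systemd. To inspect the effective unit once the drop-in is in place (sketch):
	
	  sudo systemctl daemon-reload   # pick up the new drop-in
	  systemctl cat kubelet          # prints kubelet.service followed by 10-kubeadm.conf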
	I0609 01:41:44.031187  329232 ssh_runner.go:149] Run: grep 192.168.76.2	control-plane.minikube.internal$ /etc/hosts
	I0609 01:41:44.033902  329232 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.76.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0609 01:41:44.042089  329232 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941 for IP: 192.168.76.2
	I0609 01:41:44.042136  329232 certs.go:179] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key
	I0609 01:41:44.042171  329232 certs.go:179] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key
	I0609 01:41:44.042229  329232 certs.go:290] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/client.key
	I0609 01:41:44.042250  329232 certs.go:294] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25
	I0609 01:41:44.042257  329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 with IP's: [192.168.76.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0609 01:41:44.226573  329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 ...
	I0609 01:41:44.226606  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25: {Name:mk90ec242a66bfd79902e518464ceb62421bad6d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.226771  329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 ...
	I0609 01:41:44.226783  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25: {Name:mkfae0a3bd896dd88f44a8261ced590d5cf2eaf3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.226857  329232 certs.go:305] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt
	I0609 01:41:44.226912  329232 certs.go:309] copying /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key.31bdca25 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key
	I0609 01:41:44.226968  329232 certs.go:294] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key
	I0609 01:41:44.226982  329232 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt with IP's: []
	I0609 01:41:44.493832  329232 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt ...
	I0609 01:41:44.493863  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt: {Name:mkb1a9418c2d79591044d594bd7bb611a67d607c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.494045  329232 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key ...
	I0609 01:41:44.494060  329232 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key: {Name:mkadb2ec9513a5b1c87d24f9a0d9353126c956ab Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 01:41:44.494231  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem (1338 bytes)
	W0609 01:41:44.494272  329232 certs.go:365] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941_empty.pem, impossibly tiny 0 bytes
	I0609 01:41:44.494299  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca-key.pem (1675 bytes)
	I0609 01:41:44.494326  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/ca.pem (1082 bytes)
	I0609 01:41:44.494386  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/cert.pem (1123 bytes)
	I0609 01:41:44.494417  329232 certs.go:369] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/key.pem (1679 bytes)
	I0609 01:41:44.495301  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0609 01:41:44.513759  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0609 01:41:44.556375  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0609 01:41:44.574638  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/auto-20210609012809-9941/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0609 01:41:44.590891  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0609 01:41:44.607761  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0609 01:41:44.624984  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0609 01:41:44.641979  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0609 01:41:44.661420  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/certs/9941.pem --> /usr/share/ca-certificates/9941.pem (1338 bytes)
	I0609 01:41:44.679420  329232 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0609 01:41:44.697286  329232 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (752 bytes)
	I0609 01:41:44.709772  329232 ssh_runner.go:149] Run: openssl version
	I0609 01:41:44.714441  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9941.pem && ln -fs /usr/share/ca-certificates/9941.pem /etc/ssl/certs/9941.pem"
	I0609 01:41:44.721420  329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/9941.pem
	I0609 01:41:44.724999  329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1338 Jun  9 01:04 /usr/share/ca-certificates/9941.pem
	I0609 01:41:44.725051  329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9941.pem
	I0609 01:41:44.730221  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/9941.pem /etc/ssl/certs/51391683.0"
	I0609 01:41:44.738018  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0609 01:41:44.744990  329232 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:41:44.747847  329232 certs.go:410] hashing: -rw-r--r-- 1 root root 1111 Jun  9 00:58 /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:41:44.747885  329232 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0609 01:41:44.752327  329232 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
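	The 51391683.0 and b5213941.0 names are OpenSSL subject-hash links: each trusted certificate in /etc/ssl/certs must be reachable as <subject-hash>.0 for the library to find it. A sketch of the check minikube performs here:
	
	  h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  echo "$h"    # b5213941 for this CA, matching the symlink above
	  sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/$h.0"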
	I0609 01:41:44.759007  329232 kubeadm.go:390] StartCluster: {Name:auto-20210609012809-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:auto-20210609012809-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.76.2 Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 01:41:44.759106  329232 ssh_runner.go:149] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0609 01:41:44.801843  329232 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0609 01:41:44.810329  329232 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0609 01:41:44.818129  329232 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0609 01:41:44.818183  329232 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0609 01:41:44.825259  329232 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0609 01:41:44.825307  329232 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.7:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
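	From here kubeadm init runs for roughly twelve seconds (certificate generation at 01:41:45 below, RBAC configuration at 01:41:57). To watch it from inside the node while it waits for the control plane to come up, the kubelet journal is the usual place to look (sketch):
	
	  sudo journalctl -u kubelet -f --no-pager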
	I0609 01:41:43.875536  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:46.376745  352096 pod_ready.go:102] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:45.588110  329232 out.go:197]   - Generating certificates and keys ...
	I0609 01:41:48.300953  300573 system_pods.go:86] 8 kube-system pods found
	I0609 01:41:48.300985  300573 system_pods.go:89] "coredns-fb8b8dccf-ctgrx" [adffdf7a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.300993  300573 system_pods.go:89] "etcd-old-k8s-version-20210609012901-9941" [d1de6264-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301000  300573 system_pods.go:89] "kube-apiserver-old-k8s-version-20210609012901-9941" [c8550af5-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301006  300573 system_pods.go:89] "kube-controller-manager-old-k8s-version-20210609012901-9941" [c8ed9fe0-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301013  300573 system_pods.go:89] "kube-proxy-97rr9" [ae0f0e5a-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301020  300573 system_pods.go:89] "kube-scheduler-old-k8s-version-20210609012901-9941" [c768fa6f-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301031  300573 system_pods.go:89] "metrics-server-8546d8b77b-lqx7b" [afea2287-c8c3-11eb-a78f-02427f02d9a2] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0609 01:41:48.301043  300573 system_pods.go:89] "storage-provisioner" [af32324c-c8c3-11eb-a78f-02427f02d9a2] Running
	I0609 01:41:48.301053  300573 system_pods.go:126] duration metric: took 56.76990207s to wait for k8s-apps to be running ...
	I0609 01:41:48.301068  300573 system_svc.go:44] waiting for kubelet service to be running ....
	I0609 01:41:48.301114  300573 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:41:48.310381  300573 system_svc.go:56] duration metric: took 9.307261ms WaitForService to wait for kubelet.
	I0609 01:41:48.310405  300573 kubeadm.go:547] duration metric: took 1m14.727322076s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0609 01:41:48.310424  300573 node_conditions.go:102] verifying NodePressure condition ...
	I0609 01:41:48.312372  300573 node_conditions.go:122] node storage ephemeral capacity is 309568300Ki
	I0609 01:41:48.312391  300573 node_conditions.go:123] node cpu capacity is 8
	I0609 01:41:48.312404  300573 node_conditions.go:105] duration metric: took 1.974952ms to run NodePressure ...
	I0609 01:41:48.312415  300573 start.go:219] waiting for startup goroutines ...
	I0609 01:41:48.356569  300573 start.go:463] kubectl: 1.20.5, cluster: 1.14.0 (minor skew: 6)
	I0609 01:41:48.358565  300573 out.go:170] 
	W0609 01:41:48.358730  300573 out.go:235] ! /usr/local/bin/kubectl is version 1.20.5, which may have incompatibilities with Kubernetes 1.14.0.
	I0609 01:41:48.360236  300573 out.go:170]   - Want kubectl v1.14.0? Try 'minikube kubectl -- get pods -A'
	I0609 01:41:48.361792  300573 out.go:170] * Done! kubectl is now configured to use "old-k8s-version-20210609012901-9941" cluster and "default" namespace by default
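	The skew warning above (kubectl 1.20.5 against a 1.14.0 cluster) is what the suggested minikube kubectl wrapper avoids: it fetches a client matching the cluster version. Sketch against this profile:
	
	  out/minikube-linux-amd64 kubectl -p old-k8s-version-20210609012901-9941 -- get pods -A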
	I0609 01:41:46.954352  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:48.955130  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:47.875252  352096 pod_ready.go:92] pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:47.875281  352096 pod_ready.go:81] duration metric: took 28.515609073s waiting for pod "calico-kube-controllers-55ffdb7658-gltlk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:47.875297  352096 pod_ready.go:78] waiting up to 5m0s for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:49.886712  352096 pod_ready.go:92] pod "calico-node-8bhjk" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:49.886740  352096 pod_ready.go:81] duration metric: took 2.011435025s waiting for pod "calico-node-8bhjk" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:49.886752  352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:47.864552  329232 out.go:197]   - Booting up control plane ...
	I0609 01:41:50.955197  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:53.456163  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:51.896789  352096 pod_ready.go:92] pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace has status "Ready":"True"
	I0609 01:41:51.896811  352096 pod_ready.go:81] duration metric: took 2.010052283s waiting for pod "coredns-74ff55c5b-kngs5" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:51.896821  352096 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:51.898882  352096 pod_ready.go:97] error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
	I0609 01:41:51.898909  352096 pod_ready.go:81] duration metric: took 2.080404ms waiting for pod "coredns-74ff55c5b-qc224" in "kube-system" namespace to be "Ready" ...
	E0609 01:41:51.898919  352096 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-qc224" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-qc224" not found
	I0609 01:41:51.898928  352096 pod_ready.go:78] waiting up to 5m0s for pod "etcd-calico-20210609012810-9941" in "kube-system" namespace to be "Ready" ...
	I0609 01:41:53.907845  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:55.911876  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:55.954929  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:57.955126  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:59.956675  344705 pod_ready.go:102] pod "kube-controller-manager-cilium-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:58.408965  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:42:00.909845  352096 pod_ready.go:102] pod "etcd-calico-20210609012810-9941" in "kube-system" namespace has status "Ready":"False"
	I0609 01:41:57.536931  329232 out.go:197]   - Configuring RBAC rules ...
	I0609 01:41:57.950447  329232 cni.go:93] Creating CNI manager for ""
	I0609 01:41:57.950472  329232 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0609 01:41:57.950504  329232 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0609 01:41:57.950565  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:57.950588  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl label nodes minikube.k8s.io/version=v1.21.0-beta.0 minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc minikube.k8s.io/name=auto-20210609012809-9941 minikube.k8s.io/updated_at=2021_06_09T01_41_57_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:58.270674  329232 ops.go:34] apiserver oom_adj: -16
	I0609 01:41:58.270873  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:58.834789  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:59.334848  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:41:59.834836  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:00.334592  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:00.835312  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:01.335240  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:01.834799  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0609 01:42:02.334849  329232 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.7/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	
	* 
	* ==> Docker <==
	* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:05 UTC. --
	Jun 09 01:40:02 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:02.779605017Z" level=info msg="ignoring event" container=cc0aca83efeca0d2b5a6380f0035838137a5ddede617bb12397795175054b95c module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.115851734Z" level=info msg="ignoring event" container=5e67ef29fd782e6882093cefc8d1b2e4e6502289a8aab7eb602baa78ff03d4df module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.244359054Z" level=info msg="ignoring event" container=647284240c9b3ff26c1e5d787021349e374f04b87d9f0c78f0972878ca393ce7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.376184625Z" level=info msg="ignoring event" container=8a1abb294bc93b7aeb07164f4e6a549e477648e117418f2e94e2b62b742a603f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.503253921Z" level=info msg="ignoring event" container=a8f1d2a6258c19eb81fe707363ba95a59689f2623e07e372b5f44056f81b71b6 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.655460364Z" level=info msg="ignoring event" container=0a42e38b95e96fac8c84fbd6415b07279c3f7b4dc175292ee03bf72f93504bff module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:03 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:03.868060101Z" level=info msg="ignoring event" container=8f37f3879958d7bcfb1fb37da48178584862829d0f9ab46e57d49320f37fc3f8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.043079624Z" level=info msg="ignoring event" container=83d747333959a40a15d16276795b19088263280ab507d0e39ebf3009f9cd7290 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:04 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:04.194657529Z" level=info msg="ignoring event" container=76c2df28bafa15f4875a399fd3f8bde03a6e76c0e021ffe56eb96ee35045923f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:36 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:36.611806519Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
	Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.093237111Z" level=warning msg="reference for unknown type: application/vnd.docker.distribution.manifest.v1+prettyjws" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" remote="k8s.gcr.io/echoserver:1.4"
	Jun 09 01:40:37 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:37.256429752Z" level=warning msg="Error persisting manifest" digest="sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb" error="error committing manifest to content store: commit failed: unexpected commit digest sha256:eaee4c452b076cdb05b391ed7e75e1ad0aca136665875ab5d7e2f3d9f4675769, expected sha256:5d99aa1120524c801bc8c1a7077e8f5ec122ba16b6dda1a5d3826057f67b9bcb: failed precondition" remote="k8s.gcr.io/echoserver:1.4"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432301024Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.432343163Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.433989922Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:41.749379613Z" level=info msg="ignoring event" container=209b2f1f12c840e229b4ae712cd7def2451c3e705cd6cf899ed05d4cae0c0929 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:43 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:43.034860759Z" level=info msg="ignoring event" container=e15298565a01a44ba2e81fbb337da50279e879415a5091222be3a5e36aee08d7 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032186534Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.032222718Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:40:57.041807409Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:01 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:01.346826619Z" level=info msg="ignoring event" container=417a2459ca5d2c0a4e1befd352a48e44dc91fb4015fe574d929d8c1097ce09cd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038495294Z" level=warning msg="Error getting v2 registry: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.038537670Z" level=info msg="Attempting next endpoint for pull after error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:27.040714461Z" level=error msg="Handler for POST /v1.38/images/create returned error: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:34 old-k8s-version-20210609012901-9941 dockerd[203]: time="2021-06-09T01:41:34.345802355Z" level=info msg="ignoring event" container=0a878f155b99161e7c0c238df1d2ea55fb150f549896a43282d60c2825d2e0ca module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE               CREATED              STATE               NAME                        ATTEMPT             POD ID
	0a878f155b991       a90209bb39e3d       31 seconds ago       Exited              dashboard-metrics-scraper   3                   7b28bd8313edd
	9230420d066a0       9a07b5b4bfac0       About a minute ago   Running             kubernetes-dashboard        0                   52cb0877bbe76
	80656451acc2e       eb516548c180f       About a minute ago   Running             coredns                     0                   b82c08bb91986
	d27ec4783cae5       6e38f40d628db       About a minute ago   Running             storage-provisioner         0                   3c840dfa16845
	ef3565ebed501       5cd54e388abaf       About a minute ago   Running             kube-proxy                  0                   facebb8dc382e
	15294a1b99e50       00638a24688b0       About a minute ago   Running             kube-scheduler              0                   9113a9c371341
	76559266dc96c       b95b1efa0436b       About a minute ago   Running             kube-controller-manager     0                   5c8b321c5839a
	557ff658123d4       2c4adeb21b4ff       About a minute ago   Running             etcd                        0                   4d98c28eb4819
	7435c96f89723       ecf910f40d6e0       About a minute ago   Running             kube-apiserver              0                   553d498b0da82
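	
	With the Docker runtime these are ordinary containers named with kubelet's k8s_ prefix, so the same table can be pulled straight from Docker (sketch, mirroring the name filter minikube itself ran earlier):
	
	  docker ps -a --filter name=k8s_ --format 'table {{.ID}}\t{{.Image}}\t{{.Status}}\t{{.Names}}'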
	
	* 
	* ==> coredns [80656451acc2] <==
	* .:53
	2021-06-09T01:40:37.071Z [INFO] CoreDNS-1.3.1
	2021-06-09T01:40:37.071Z [INFO] linux/amd64, go1.11.4, 6b56a9c
	CoreDNS-1.3.1
	linux/amd64, go1.11.4, 6b56a9c
	2021-06-09T01:40:37.071Z [INFO] plugin/reload: Running configuration MD5 = d7336ec3b7f1205cfa0fef85b62c291b
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-20210609012901-9941
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-20210609012901-9941
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=d4601e4184ba947b7d077d7d426c2bca79bbf9fc
	                    minikube.k8s.io/name=old-k8s-version-20210609012901-9941
	                    minikube.k8s.io/updated_at=2021_06_09T01_40_17_0700
	                    minikube.k8s.io/version=v1.21.0-beta.0
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Wed, 09 Jun 2021 01:40:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Wed, 09 Jun 2021 01:41:13 +0000   Wed, 09 Jun 2021 01:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.67.2
	  Hostname:    old-k8s-version-20210609012901-9941
	Capacity:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951376Ki
	 pods:               110
	Allocatable:
	 cpu:                8
	 ephemeral-storage:  309568300Ki
	 hugepages-1Gi:      0
	 hugepages-2Mi:      0
	 memory:             32951376Ki
	 pods:               110
	System Info:
	 Machine ID:                 b77ec962e3734760b1e756ffc5e83152
	 System UUID:                fcb82c90-e30d-41cf-83d7-0b244092491c
	 Boot ID:                    e08f76ce-1642-432a-8e61-95aaa19183a7
	 Kernel Version:             4.9.0-15-amd64
	 OS Image:                   Ubuntu 20.04.2 LTS
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  docker://20.10.7
	 Kubelet Version:            v1.14.0
	 Kube-Proxy Version:         v1.14.0
	PodCIDR:                     10.244.0.0/24
	Non-terminated Pods:         (10 in total)
	  Namespace                  Name                                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                coredns-fb8b8dccf-ctgrx                                        100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     93s
	  kube-system                etcd-old-k8s-version-20210609012901-9941                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                kube-apiserver-old-k8s-version-20210609012901-9941             250m (3%)     0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                kube-controller-manager-old-k8s-version-20210609012901-9941    200m (2%)     0 (0%)      0 (0%)           0 (0%)         47s
	  kube-system                kube-proxy-97rr9                                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         93s
	  kube-system                kube-scheduler-old-k8s-version-20210609012901-9941             100m (1%)     0 (0%)      0 (0%)           0 (0%)         50s
	  kube-system                metrics-server-8546d8b77b-lqx7b                                100m (1%)     0 (0%)      300Mi (0%)       0 (0%)         89s
	  kube-system                storage-provisioner                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         91s
	  kubernetes-dashboard       dashboard-metrics-scraper-5b494cc544-529qb                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	  kubernetes-dashboard       kubernetes-dashboard-5d8978d65d-5c7t7                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         90s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (9%)   0 (0%)
	  memory             370Mi (1%)  170Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From                                             Message
	  ----    ------                   ----                 ----                                             -------
	  Normal  Starting                 118s                 kubelet, old-k8s-version-20210609012901-9941     Starting kubelet.
	  Normal  NodeHasSufficientMemory  118s (x8 over 118s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    118s (x8 over 118s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     118s (x7 over 118s)  kubelet, old-k8s-version-20210609012901-9941     Node old-k8s-version-20210609012901-9941 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  118s                 kubelet, old-k8s-version-20210609012901-9941     Updated Node Allocatable limit across pods
	  Normal  Starting                 90s                  kube-proxy, old-k8s-version-20210609012901-9941  Starting kube-proxy.
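	
	This node dump is kubectl describe output captured by the harness; minikube registers a kubectl context named after the profile, so it can be reproduced with (sketch):
	
	  kubectl --context old-k8s-version-20210609012901-9941 describe node old-k8s-version-20210609012901-9941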
	
	* 
	* ==> dmesg <==
	* [  +1.658653] IPv4: martian source 10.85.0.4 from 10.85.0.4, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff ce 5c c6 1f 63 8a 08 06        .......\..c...
	[  +0.004022] IPv4: martian source 10.85.0.5 from 10.85.0.5, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 0e 5d 4b c1 e0 ed 08 06        .......]K.....
	[  +2.140856] IPv4: martian source 10.85.0.6 from 10.85.0.6, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 3e a3 2b db cb b6 08 06        ......>.+.....
	[  +0.147751] IPv4: martian source 10.85.0.7 from 10.85.0.7, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9a f2 40 59 da 87 08 06        ........@Y....
	[  +2.083360] IPv4: martian source 10.85.0.8 from 10.85.0.8, on dev eth0
	[  +0.000001] ll header: 00000000: ff ff ff ff ff ff 56 9d 71 18 33 dd 08 06        ......V.q.3...
	[  +0.000616] IPv4: martian source 10.85.0.9 from 10.85.0.9, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 8e 8d b3 62 b0 07 08 06        .........b....
	[  +1.714381] IPv4: martian source 10.85.0.10 from 10.85.0.10, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 9e d1 b5 da bf 05 08 06        ..............
	[  +0.003822] IPv4: martian source 10.85.0.11 from 10.85.0.11, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 92 3a 5c 13 9f 7c 08 06        .......:\..|..
	[  +0.920701] IPv4: martian source 10.85.0.12 from 10.85.0.12, on dev eth0
	[  +0.000003] ll header: 00000000: ff ff ff ff ff ff d2 50 1c d3 1f 17 08 06        .......P......
	[  +0.002962] IPv4: martian source 10.85.0.13 from 10.85.0.13, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff 86 09 69 5a 94 d2 08 06        ........iZ....
	[  +0.999987] IPv4: martian source 10.85.0.14 from 10.85.0.14, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff fa 88 03 51 34 f3 08 06        .........Q4...
	[  +0.004235] IPv4: martian source 10.85.0.15 from 10.85.0.15, on dev eth0
	[  +0.000002] ll header: 00000000: ff ff ff ff ff ff f6 25 39 34 91 f2 08 06        .......%!.(MISSING)..
	[  +6.380947] cgroup: cgroup2: unknown option "nsdelegate"
	
	* 
	* ==> etcd [557ff658123d] <==
	* 2021-06-09 01:40:48.647414 W | wal: sync duration of 1.103904697s, expected less than 1s
	2021-06-09 01:40:48.753091 W | etcdserver: request "header:<ID:2289933000483394557 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" mod_revision:364 > success:<request_put:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" value_size:1214 >> failure:<request_range:<key:\"/registry/replicasets/kube-system/coredns-fb8b8dccf\" > >>" with result "size:16" took too long (105.414042ms) to execute
	2021-06-09 01:40:48.753496 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (250.229741ms) to execute
	2021-06-09 01:40:48.753722 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/coredns-fb8b8dccf-ctgrx\" " with result "range_response_count:1 size:1770" took too long (891.632545ms) to execute
	2021-06-09 01:40:50.467937 W | etcdserver: request "header:<ID:2289933000483394562 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:537 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:677 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >>" with result "size:16" took too long (1.08693209s) to execute
	2021-06-09 01:40:50.468037 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (1.566131533s) to execute
	2021-06-09 01:40:50.468071 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:3347" took too long (1.710868913s) to execute
	2021-06-09 01:40:50.468206 W | etcdserver: read-only range request "key:\"/registry/events/kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544-529qb.1686c662e29f9611\" " with result "range_response_count:1 size:597" took too long (928.182072ms) to execute
	2021-06-09 01:40:51.483862 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-proxy-97rr9\" " with result "range_response_count:1 size:2147" took too long (1.013095215s) to execute
	2021-06-09 01:41:12.976673 W | wal: sync duration of 1.117225227s, expected less than 1s
	2021-06-09 01:41:13.114230 W | etcdserver: read-only range request "key:\"/registry/minions/\" range_end:\"/registry/minions0\" " with result "range_response_count:1 size:3347" took too long (314.968585ms) to execute
	2021-06-09 01:41:13.114284 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d515c\" " with result "range_response_count:1 size:550" took too long (1.100437486s) to execute
	2021-06-09 01:41:13.114371 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" " with result "range_response_count:4 size:7785" took too long (687.507808ms) to execute
	2021-06-09 01:41:13.114518 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/metrics-server-8546d8b77b-lqx7b\" " with result "range_response_count:1 size:1851" took too long (1.101558003s) to execute
	2021-06-09 01:41:13.114553 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.387664ms) to execute
	2021-06-09 01:41:13.722674 W | etcdserver: read-only range request "key:\"/registry/events/kube-system/metrics-server-8546d8b77b-lqx7b.1686c6626a0d9249\" " with result "range_response_count:1 size:511" took too long (603.050028ms) to execute
	2021-06-09 01:41:13.722784 W | etcdserver: read-only range request "key:\"/registry/leases/kube-node-lease/old-k8s-version-20210609012901-9941\" " with result "range_response_count:1 size:395" took too long (601.855298ms) to execute
	2021-06-09 01:41:13.723059 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-node-lease\" " with result "range_response_count:1 size:187" took too long (573.108462ms) to execute
	2021-06-09 01:41:15.464247 W | etcdserver: read-only range request "key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" " with result "range_response_count:0 size:5" took too long (1.450534843s) to execute
	2021-06-09 01:41:15.464304 W | etcdserver: read-only range request "key:\"/registry/rolebindings\" range_end:\"/registry/rolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (166.55648ms) to execute
	2021-06-09 01:41:15.464595 W | etcdserver: read-only range request "key:\"/registry/ingress\" range_end:\"/registry/ingrest\" count_only:true " with result "range_response_count:0 size:5" took too long (144.856126ms) to execute
	2021-06-09 01:41:15.465036 W | etcdserver: read-only range request "key:\"/registry/clusterrolebindings\" range_end:\"/registry/clusterrolebindingt\" count_only:true " with result "range_response_count:0 size:7" took too long (1.527858302s) to execute
	2021-06-09 01:41:15.465734 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (313.803884ms) to execute
	2021-06-09 01:41:37.088502 W | etcdserver: read-only range request "key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" " with result "range_response_count:1 size:765" took too long (579.483729ms) to execute
	2021-06-09 01:41:57.525183 W | etcdserver: read-only range request "key:\"/registry/jobs/\" range_end:\"/registry/jobs0\" limit:500 " with result "range_response_count:0 size:5" took too long (146.394885ms) to execute
	
	* 
	* ==> kernel <==
	*  01:42:05 up  1:24,  0 users,  load average: 4.91, 3.39, 2.63
	Linux old-k8s-version-20210609012901-9941 4.9.0-15-amd64 #1 SMP Debian 4.9.258-1 (2021-03-08) x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 20.04.2 LTS"
	
	* 
	* ==> kube-apiserver [7435c96f8972] <==
	* I0609 01:41:53.476131       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:54.476295       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:54.476431       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:55.476606       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:55.476735       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:56.476937       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:56.477102       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:57.477291       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:57.477429       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:58.477563       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:58.477715       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:41:59.477874       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:41:59.478011       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:00.478169       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:00.478301       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:01.478453       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:01.478583       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:02.478748       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:02.478888       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:03.479048       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:03.479199       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:04.479372       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:04.479523       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	I0609 01:42:05.479686       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000001
	I0609 01:42:05.479844       1 controller.go:102] OpenAPI AggregationController: Processing item k8s_internal_local_delegation_chain_0000000002
	
	* 
	* ==> kube-controller-manager [76559266dc96] <==
	* I0609 01:40:35.350957       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"427", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.355715       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.359115       1 event.go:209] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard", UID:"af7ffe92-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"431", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-5d8978d65d to 1
	E0609 01:40:35.361941       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.362185       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.363976       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"435", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.365457       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.365465       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.367928       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.372059       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.372481       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.441817       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.441964       1 replica_set.go:450] Sync "kubernetes-dashboard/dashboard-metrics-scraper-5b494cc544" failed with pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.442412       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.442440       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "dashboard-metrics-scraper-5b494cc544-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	E0609 01:40:35.464444       1 replica_set.go:450] Sync "kubernetes-dashboard/kubernetes-dashboard-5d8978d65d" failed with pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.464486       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Warning' reason: 'FailedCreate' Error creating: pods "kubernetes-dashboard-5d8978d65d-" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount "kubernetes-dashboard" not found
	I0609 01:40:35.546527       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"dashboard-metrics-scraper-5b494cc544", UID:"af75069a-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"432", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: dashboard-metrics-scraper-5b494cc544-529qb
	I0609 01:40:35.546799       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kubernetes-dashboard", Name:"kubernetes-dashboard-5d8978d65d", UID:"af80a0bd-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"440", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-5d8978d65d-5c7t7
	I0609 01:40:36.049812       1 event.go:209] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"metrics-server-8546d8b77b", UID:"af420efe-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"apps/v1", ResourceVersion:"402", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: metrics-server-8546d8b77b-lqx7b
	E0609 01:41:02.997582       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0609 01:41:05.550860       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0609 01:41:33.249304       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0609 01:41:37.552663       1 garbagecollector.go:644] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0609 01:42:03.500854       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [ef3565ebed50] <==
	* W0609 01:40:33.954499       1 server_others.go:295] Flag proxy-mode="" unknown, assuming iptables proxy
	I0609 01:40:33.964131       1 server_others.go:148] Using iptables Proxier.
	I0609 01:40:33.964802       1 server_others.go:178] Tearing down inactive rules.
	E0609 01:40:34.154995       1 proxier.go:583] Error removing iptables rules in ipvs proxier: error deleting chain "KUBE-MARK-MASQ": exit status 1: iptables: Too many links.
	I0609 01:40:35.290112       1 server.go:555] Version: v1.14.0
	I0609 01:40:35.341044       1 config.go:202] Starting service config controller
	I0609 01:40:35.341164       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
	I0609 01:40:35.341748       1 config.go:102] Starting endpoints config controller
	I0609 01:40:35.343249       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
	I0609 01:40:35.441725       1 controller_utils.go:1034] Caches are synced for service config controller
	I0609 01:40:35.443748       1 controller_utils.go:1034] Caches are synced for endpoints config controller
	
	* 
	* ==> kube-scheduler [15294a1b99e5] <==
	* W0609 01:40:10.688361       1 authentication.go:55] Authentication is disabled
	I0609 01:40:10.688374       1 deprecated_insecure_serving.go:49] Serving healthz insecurely on [::]:10251
	I0609 01:40:10.688743       1 secure_serving.go:116] Serving securely on 127.0.0.1:10259
	E0609 01:40:12.981814       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0609 01:40:12.981916       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0609 01:40:12.982827       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0609 01:40:13.050964       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0609 01:40:13.062003       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0609 01:40:13.062138       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0609 01:40:13.062510       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0609 01:40:13.062930       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0609 01:40:13.064487       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0609 01:40:13.065331       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0609 01:40:13.982943       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0609 01:40:13.984017       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0609 01:40:13.985045       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0609 01:40:14.052710       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0609 01:40:14.063171       1 reflector.go:126] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:223: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0609 01:40:14.063859       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0609 01:40:14.065063       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0609 01:40:14.066262       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0609 01:40:14.067278       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0609 01:40:14.068396       1 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	I0609 01:40:15.890053       1 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
	I0609 01:40:15.990228       1 controller_utils.go:1034] Caches are synced for scheduler controller
	
	* 
	* ==> kubelet <==
	* -- Logs begin at Wed 2021-06-09 01:34:39 UTC, end at Wed 2021-06-09 01:42:06 UTC. --
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434450    6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434528    6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.434593    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:40:41 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:41.702071    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Jun 09 01:40:43 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:43.724887    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:40:44 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:44.734847    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:40:49 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:49.538510    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 10s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042394    6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042449    6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042530    6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:40:57 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:40:57.042566    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:01 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:01.836699    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:09 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:09.538606    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:12 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:12.012609    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Jun 09 01:41:21 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:21.011631    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 20s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.040969    6233 remote_image.go:113] PullImage "fake.domain/k8s.gcr.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041003    6233 kuberuntime_image.go:51] Pull image "fake.domain/k8s.gcr.io/echoserver:1.4" failed: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041051    6233 kuberuntime_manager.go:780] container start failed: ErrImagePull: rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host
	Jun 09 01:41:27 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:27.041074    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = Error response from daemon: Get https://fake.domain/v2/: dial tcp: lookup fake.domain on 192.168.67.1:53: no such host"
	Jun 09 01:41:35 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:35.034469    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:39 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:39.538621    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:40 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:40.012660    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Jun 09 01:41:52 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:52.011734    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	Jun 09 01:41:53 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:41:53.012733    6233 pod_workers.go:190] Error syncing pod afea2287-c8c3-11eb-a78f-02427f02d9a2 ("metrics-server-8546d8b77b-lqx7b_kube-system(afea2287-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/k8s.gcr.io/echoserver:1.4\""
	Jun 09 01:42:05 old-k8s-version-20210609012901-9941 kubelet[6233]: E0609 01:42:05.011713    6233 pod_workers.go:190] Error syncing pod af9cb92a-c8c3-11eb-a78f-02427f02d9a2 ("dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"), skipping: failed to "StartContainer" for "dashboard-metrics-scraper" with CrashLoopBackOff: "Back-off 40s restarting failed container=dashboard-metrics-scraper pod=dashboard-metrics-scraper-5b494cc544-529qb_kubernetes-dashboard(af9cb92a-c8c3-11eb-a78f-02427f02d9a2)"
	
	* 
	* ==> kubernetes-dashboard [9230420d066a] <==
	* 2021/06/09 01:40:37 Using namespace: kubernetes-dashboard
	2021/06/09 01:40:37 Using in-cluster config to connect to apiserver
	2021/06/09 01:40:37 Using secret token for csrf signing
	2021/06/09 01:40:37 Initializing csrf token from kubernetes-dashboard-csrf secret
	2021/06/09 01:40:37 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2021/06/09 01:40:37 Successful initial request to the apiserver, version: v1.14.0
	2021/06/09 01:40:37 Generating JWE encryption key
	2021/06/09 01:40:37 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2021/06/09 01:40:37 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2021/06/09 01:40:37 Initializing JWE encryption key from synchronized object
	2021/06/09 01:40:37 Creating in-cluster Sidecar client
	2021/06/09 01:40:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/06/09 01:40:37 Serving insecurely on HTTP port: 9090
	2021/06/09 01:41:07 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/06/09 01:41:37 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2021/06/09 01:40:37 Starting overwatch
	
	* 
	* ==> storage-provisioner [d27ec4783cae] <==
	* I0609 01:40:36.443365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0609 01:40:36.452888       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0609 01:40:36.452950       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0609 01:40:36.459951       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0609 01:40:36.460148       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
	I0609 01:40:36.461060       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"af273732-c8c3-11eb-a78f-02427f02d9a2", APIVersion:"v1", ResourceVersion:"494", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d became leader
	I0609 01:40:36.560264       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-20210609012901-9941_fc86cf60-f6d1-4320-9b0a-458b591f1e4d!
	

-- /stdout --
helpers_test.go:250: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
helpers_test.go:257: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:263: non-running pods: metrics-server-8546d8b77b-lqx7b
helpers_test.go:265: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/Pause]: describe non-running pods <======
helpers_test.go:268: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b
helpers_test.go:268: (dbg) Non-zero exit: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1 (63.082266ms)

** stderr ** 
	Error from server (NotFound): pods "metrics-server-8546d8b77b-lqx7b" not found

** /stderr **
helpers_test.go:270: kubectl --context old-k8s-version-20210609012901-9941 describe pod metrics-server-8546d8b77b-lqx7b: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/Pause (7.55s)
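For hands-on triage, the post-mortem queries above can be re-run directly; a minimal sketch, assuming the old-k8s-version profile from this run still exists and kubectl is on PATH:

	# Same query the harness uses: pods in any namespace not in phase Running
	kubectl --context old-k8s-version-20210609012901-9941 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'

	# The harness's describe ran without a namespace flag, so it searched "default";
	# metrics-server lives in kube-system, which likely explains the NotFound above
	kubectl --context old-k8s-version-20210609012901-9941 -n kube-system \
	  describe pod metrics-server-8546d8b77b-lqx7b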

Test pass (245/266)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.14.0/json-events 11.61
4 TestDownloadOnly/v1.14.0/preload-exists 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.07
10 TestDownloadOnly/v1.20.7/json-events 7.18
11 TestDownloadOnly/v1.20.7/preload-exists 0
15 TestDownloadOnly/v1.20.7/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-alpha.2/json-events 5.64
18 TestDownloadOnly/v1.22.0-alpha.2/preload-exists 0
22 TestDownloadOnly/v1.22.0-alpha.2/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 0.37
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.23
25 TestDownloadOnlyKic 3.36
26 TestOffline 153.01
29 TestAddons/parallel/Registry 14.59
30 TestAddons/parallel/Ingress 44.81
31 TestAddons/parallel/MetricsServer 5.59
32 TestAddons/parallel/HelmTiller 12.51
33 TestAddons/parallel/Olm 47.88
34 TestAddons/parallel/CSI 77.31
35 TestAddons/parallel/GCPAuth 34.9
36 TestCertOptions 37.93
37 TestDockerFlags 46.99
38 TestForceSystemdFlag 43.54
39 TestForceSystemdEnv 32.95
44 TestErrorSpam/start 59.06
45 TestErrorSpam/status 31
46 TestErrorSpam/pause 1.89
47 TestErrorSpam/unpause 0.52
48 TestErrorSpam/stop 11.01
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 121.89
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 4.87
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.23
59 TestFunctional/serial/CacheCmd/cache/add_remote 3.31
60 TestFunctional/serial/CacheCmd/cache/add_local 1.65
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
62 TestFunctional/serial/CacheCmd/cache/list 0.06
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.29
64 TestFunctional/serial/CacheCmd/cache/cache_reload 1.87
65 TestFunctional/serial/CacheCmd/cache/delete 0.12
66 TestFunctional/serial/MinikubeKubectlCmd 0.12
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
68 TestFunctional/serial/ExtraConfig 99.82
69 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/parallel/ConfigCmd 0.47
72 TestFunctional/parallel/DashboardCmd 3.52
73 TestFunctional/parallel/DryRun 0.62
74 TestFunctional/parallel/StatusCmd 1
75 TestFunctional/parallel/LogsCmd 2.8
76 TestFunctional/parallel/LogsFileCmd 1.67
77 TestFunctional/parallel/MountCmd 10.79
79 TestFunctional/parallel/ServiceCmd 14.29
80 TestFunctional/parallel/AddonsCmd 0.2
81 TestFunctional/parallel/PersistentVolumeClaim 31.38
83 TestFunctional/parallel/SSHCmd 0.62
84 TestFunctional/parallel/CpCmd 0.63
85 TestFunctional/parallel/MySQL 20.46
86 TestFunctional/parallel/FileSync 0.28
87 TestFunctional/parallel/CertSync 0.86
89 TestFunctional/parallel/DockerEnv 1.31
91 TestFunctional/parallel/NodeLabels 0.06
92 TestFunctional/parallel/LoadImage 2.44
93 TestFunctional/parallel/RemoveImage 2.73
94 TestFunctional/parallel/BuildImage 4.11
95 TestFunctional/parallel/ListImages 0.33
96 TestFunctional/parallel/NonActiveRuntimeDisabled 0.35
97 TestFunctional/parallel/ProfileCmd/profile_not_create 0.49
98 TestFunctional/parallel/ProfileCmd/profile_list 0.42
99 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
101 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
103 TestFunctional/parallel/UpdateContextCmd/no_changes 0.11
104 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
105 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
106 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.1
107 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
111 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
112 TestFunctional/delete_busybox_image 0.08
113 TestFunctional/delete_my-image_image 0.04
114 TestFunctional/delete_minikube_cached_images 0.05
118 TestJSONOutput/start/Audit 0
120 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
121 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
123 TestJSONOutput/pause/Audit 0
125 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
126 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
128 TestJSONOutput/unpause/Audit 0
130 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
131 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
133 TestJSONOutput/stop/Audit 0
135 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
136 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
137 TestErrorJSONOutput 0.34
139 TestKicCustomNetwork/create_custom_network 26.74
140 TestKicCustomNetwork/use_default_bridge_network 26.14
141 TestKicExistingNetwork 26.71
142 TestMainNoArgs 0.06
145 TestMultiNode/serial/FreshStart2Nodes 140.94
146 TestMultiNode/serial/DeployApp2Nodes 5.93
147 TestMultiNode/serial/PingHostFrom2Pods 1.11
148 TestMultiNode/serial/AddNode 25.73
149 TestMultiNode/serial/ProfileList 0.3
150 TestMultiNode/serial/CopyFile 2.35
151 TestMultiNode/serial/StopNode 2.56
152 TestMultiNode/serial/StartAfterStop 24.89
153 TestMultiNode/serial/DeleteNode 5.39
154 TestMultiNode/serial/StopMultiNode 22.11
155 TestMultiNode/serial/RestartMultiNode 130.54
156 TestMultiNode/serial/ValidateNameConflict 26.81
162 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
163 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 11.28
165 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
166 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 10.23
168 TestDebPackageInstall/install_amd64_debian:10/minikube 0
169 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.7
171 TestDebPackageInstall/install_amd64_debian:9/minikube 0
172 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.53
174 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
175 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 16.8
177 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
178 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 16.33
180 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
181 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 16.37
183 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
184 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 15.34
185 TestPreload 115.25
187 TestScheduledStopUnix 52.63
188 TestSkaffold 70.2
190 TestInsufficientStorage 9.11
191 TestRunningBinaryUpgrade 94.76
193 TestKubernetesUpgrade 128.38
194 TestMissingContainerUpgrade 130.15
196 TestPause/serial/Start 165.75
204 TestPause/serial/SecondStartNoReconfiguration 5.5
205 TestPause/serial/Pause 0.53
206 TestPause/serial/VerifyStatus 0.34
207 TestPause/serial/Unpause 0.57
208 TestPause/serial/PauseAgain 0.78
209 TestPause/serial/DeletePaused 2.93
210 TestPause/serial/VerifyDeletedResources 0.89
222 TestStoppedBinaryUpgrade/MinikubeLogs 1.47
224 TestStartStop/group/old-k8s-version/serial/FirstStart 315.58
226 TestStartStop/group/no-preload/serial/FirstStart 90.49
228 TestStartStop/group/embed-certs/serial/FirstStart 129.07
230 TestStartStop/group/default-k8s-different-port/serial/FirstStart 128.16
231 TestStartStop/group/no-preload/serial/DeployApp 10.51
232 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.68
233 TestStartStop/group/no-preload/serial/Stop 11.16
234 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
235 TestStartStop/group/no-preload/serial/SecondStart 343.7
236 TestStartStop/group/embed-certs/serial/DeployApp 9.51
237 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.69
238 TestStartStop/group/embed-certs/serial/Stop 11.03
239 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.22
240 TestStartStop/group/embed-certs/serial/SecondStart 381.41
241 TestStartStop/group/default-k8s-different-port/serial/DeployApp 11.56
242 TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive 0.66
243 TestStartStop/group/default-k8s-different-port/serial/Stop 11.12
244 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.2
245 TestStartStop/group/default-k8s-different-port/serial/SecondStart 492.62
246 TestStartStop/group/old-k8s-version/serial/DeployApp 9.55
247 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.7
248 TestStartStop/group/old-k8s-version/serial/Stop 11.11
249 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
250 TestStartStop/group/old-k8s-version/serial/SecondStart 430.23
251 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.01
252 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.09
253 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.3
254 TestStartStop/group/no-preload/serial/Pause 2.76
256 TestStartStop/group/newest-cni/serial/FirstStart 41.65
257 TestStartStop/group/newest-cni/serial/DeployApp 0
258 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.64
259 TestStartStop/group/newest-cni/serial/Stop 11.16
260 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
261 TestStartStop/group/newest-cni/serial/SecondStart 19.01
262 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.02
263 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
264 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.36
265 TestStartStop/group/embed-certs/serial/Pause 2.94
266 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
267 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
268 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.34
269 TestStartStop/group/newest-cni/serial/Pause 3.19
270 TestNetworkPlugins/group/auto/Start 327.56
271 TestNetworkPlugins/group/false/Start 97.33
272 TestNetworkPlugins/group/false/KubeletFlags 0.29
273 TestNetworkPlugins/group/false/NetCatPod 9.24
274 TestNetworkPlugins/group/false/DNS 0.17
275 TestNetworkPlugins/group/false/Localhost 0.16
276 TestNetworkPlugins/group/false/HairPin 5.17
277 TestNetworkPlugins/group/cilium/Start 129.2
278 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.02
279 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.09
280 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.3
281 TestStartStop/group/default-k8s-different-port/serial/Pause 2.72
282 TestNetworkPlugins/group/calico/Start 126.1
283 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.01
284 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
285 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.31
287 TestNetworkPlugins/group/custom-weave/Start 122.21
288 TestNetworkPlugins/group/cilium/ControllerPod 5.02
289 TestNetworkPlugins/group/cilium/KubeletFlags 0.31
290 TestNetworkPlugins/group/cilium/NetCatPod 10.41
291 TestNetworkPlugins/group/cilium/DNS 0.2
292 TestNetworkPlugins/group/cilium/Localhost 0.16
293 TestNetworkPlugins/group/cilium/HairPin 0.17
294 TestNetworkPlugins/group/enable-default-cni/Start 146.03
295 TestNetworkPlugins/group/calico/ControllerPod 5.02
296 TestNetworkPlugins/group/calico/KubeletFlags 0.34
297 TestNetworkPlugins/group/calico/NetCatPod 14.13
298 TestNetworkPlugins/group/calico/DNS 0.19
299 TestNetworkPlugins/group/calico/Localhost 0.18
300 TestNetworkPlugins/group/calico/HairPin 0.19
301 TestNetworkPlugins/group/kindnet/Start 121.29
302 TestNetworkPlugins/group/auto/KubeletFlags 0.29
303 TestNetworkPlugins/group/auto/NetCatPod 14.43
304 TestNetworkPlugins/group/auto/DNS 0.18
305 TestNetworkPlugins/group/auto/Localhost 0.15
306 TestNetworkPlugins/group/auto/HairPin 5.14
307 TestNetworkPlugins/group/bridge/Start 100.99
308 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.32
309 TestNetworkPlugins/group/custom-weave/NetCatPod 10.42
310 TestNetworkPlugins/group/kubenet/Start 106.19
311 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
312 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.27
313 TestNetworkPlugins/group/kindnet/ControllerPod 5.59
314 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
315 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
316 TestNetworkPlugins/group/enable-default-cni/DNS 0.2
317 TestNetworkPlugins/group/enable-default-cni/Localhost 0.17
318 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
319 TestNetworkPlugins/group/kindnet/DNS 0.15
320 TestNetworkPlugins/group/kindnet/Localhost 0.15
321 TestNetworkPlugins/group/kindnet/HairPin 0.16
322 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
323 TestNetworkPlugins/group/bridge/NetCatPod 8.29
324 TestNetworkPlugins/group/bridge/DNS 0.16
325 TestNetworkPlugins/group/bridge/Localhost 0.15
326 TestNetworkPlugins/group/bridge/HairPin 0.15
327 TestNetworkPlugins/group/kubenet/KubeletFlags 0.28
328 TestNetworkPlugins/group/kubenet/NetCatPod 9.23
329 TestNetworkPlugins/group/kubenet/DNS 0.15
330 TestNetworkPlugins/group/kubenet/Localhost 0.15
331 TestNetworkPlugins/group/kubenet/HairPin 0.14
TestDownloadOnly/v1.14.0/json-events (11.61s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210609005708-9941 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210609005708-9941 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker  --container-runtime=docker: (11.612620441s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (11.61s)
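The json-events flow can be exercised outside the harness as well; a minimal sketch, assuming a minikube binary on PATH (the harness invokes out/minikube-linux-amd64), a hypothetical profile name, and jq installed purely for readability:

	# --download-only fetches the kic base image and preload tarball without starting a cluster;
	# -o=json turns each progress step into a one-line JSON event on stdout
	minikube start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker

	# Pretty-print the emitted event stream
	minikube start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.14.0 --container-runtime=docker --driver=docker | jq .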

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210609005708-9941
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210609005708-9941: exit status 85 (70.577052ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/06/09 00:57:08
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0609 00:57:08.474639    9954 out.go:291] Setting OutFile to fd 1 ...
	I0609 00:57:08.474708    9954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 00:57:08.474712    9954 out.go:304] Setting ErrFile to fd 2...
	I0609 00:57:08.474716    9954 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 00:57:08.474805    9954 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	W0609 00:57:08.474898    9954 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/config/config.json: no such file or directory
	I0609 00:57:08.475104    9954 out.go:298] Setting JSON to true
	I0609 00:57:08.509839    9954 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":2391,"bootTime":1623197837,"procs":132,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0609 00:57:08.509937    9954 start.go:121] virtualization: kvm guest
	I0609 00:57:08.512702    9954 notify.go:169] Checking for updates...
	I0609 00:57:08.514504    9954 driver.go:335] Setting default libvirt URI to qemu:///system
	I0609 00:57:08.559380    9954 docker.go:132] docker version: linux-19.03.15
	I0609 00:57:08.559463    9954 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 00:57:08.874740    9954 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:132 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:34 SystemTime:2021-06-09 00:57:08.590667464 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 00:57:08.874831    9954 docker.go:244] overlay module found
	I0609 00:57:08.876860    9954 start.go:279] selected driver: docker
	I0609 00:57:08.876872    9954 start.go:752] validating driver "docker" against <nil>
	I0609 00:57:08.877303    9954 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 00:57:08.959684    9954 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:132 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:34 SystemTime:2021-06-09 00:57:08.909992681 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 00:57:08.959764    9954 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0609 00:57:08.960252    9954 start_flags.go:311] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0609 00:57:08.960331    9954 start_flags.go:638] Wait components to verify : map[apiserver:true system_pods:true]
	I0609 00:57:08.960349    9954 cni.go:93] Creating CNI manager for ""
	I0609 00:57:08.960359    9954 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0609 00:57:08.960364    9954 start_flags.go:273] config:
	{Name:download-only-20210609005708-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210609005708-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 00:57:08.962439    9954 cache.go:115] Beginning downloading kic base image for docker with docker
	I0609 00:57:08.963782    9954 preload.go:110] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0609 00:57:08.963816    9954 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
	I0609 00:57:08.964059    9954 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
	I0609 00:57:08.964130    9954 image.go:118] Writing gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
	I0609 00:57:09.020622    9954 preload.go:145] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0609 00:57:09.020647    9954 cache.go:54] Caching tarball of preloaded images
	I0609 00:57:09.020834    9954 preload.go:110] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0609 00:57:09.022827    9954 preload.go:230] getting checksum for preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0609 00:57:09.079456    9954 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4?checksum=md5:cdcafd56ec108ba69c9fa94a2cd82e35 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4
	I0609 00:57:12.324980    9954 preload.go:240] saving checksum for preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0609 00:57:12.325058    9954 preload.go:247] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4 ...
	I0609 00:57:13.320603    9954 cache.go:57] Finished verifying existence of preloaded tar for v1.14.0 on docker
	I0609 00:57:13.320924    9954 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/download-only-20210609005708-9941/config.json ...
	I0609 00:57:13.320954    9954 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/download-only-20210609005708-9941/config.json: {Name:mka37bc0dad0a091f1e3a449d8a3bab6bcdb4308 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0609 00:57:13.321094    9954 preload.go:110] Checking if preload exists for k8s version v1.14.0 and runtime docker
	I0609 00:57:13.321229    9954 download.go:86] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.14.0/bin/linux/amd64/kubectl.sha1 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/linux/v1.14.0/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210609005708-9941"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.07s)
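
The download.go:86 lines above fetch the preload tarball with a "?checksum=md5:..." query suffix, and the "getting checksum" / "verifying checksum" steps recompute the digest over the downloaded file before it is cached. A minimal sketch of that verify step, assuming a plain HTTP fetch plus an MD5 compare (downloadWithMD5 is a hypothetical helper, not minikube's actual downloader):

package main

import (
	"crypto/md5"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"os"
)

// downloadWithMD5 is a hypothetical helper: it fetches url into dest and
// fails if the file's MD5 digest does not match want (hex-encoded),
// mirroring the checksum verification logged by preload.go above.
func downloadWithMD5(url, dest, want string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()

	out, err := os.Create(dest)
	if err != nil {
		return err
	}
	defer out.Close()

	h := md5.New()
	// Hash the body while streaming it to disk.
	if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
		return err
	}
	if got := hex.EncodeToString(h.Sum(nil)); got != want {
		return fmt.Errorf("checksum mismatch: got %s, want %s", got, want)
	}
	return nil
}

func main() {
	// URL and digest taken from the download.go:86 line above.
	err := downloadWithMD5(
		"https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.14.0-docker-overlay2-amd64.tar.lz4",
		"/tmp/preload.tar.lz4",
		"cdcafd56ec108ba69c9fa94a2cd82e35")
	fmt.Println("verify:", err)
}

The kubectl download at the end of the run uses "checksum=file:<url>.sha1" instead: the expected digest is itself fetched from a sidecar file rather than inlined in the query.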

TestDownloadOnly/v1.20.7/json-events (7.18s)

=== RUN   TestDownloadOnly/v1.20.7/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210609005708-9941 --force --alsologtostderr --kubernetes-version=v1.20.7 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210609005708-9941 --force --alsologtostderr --kubernetes-version=v1.20.7 --container-runtime=docker --driver=docker  --container-runtime=docker: (7.17843918s)
--- PASS: TestDownloadOnly/v1.20.7/json-events (7.18s)

TestDownloadOnly/v1.20.7/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.7/preload-exists
--- PASS: TestDownloadOnly/v1.20.7/preload-exists (0.00s)

TestDownloadOnly/v1.20.7/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.7/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210609005708-9941
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210609005708-9941: exit status 85 (71.933043ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/06/09 00:57:20
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0609 00:57:20.161145   10083 out.go:291] Setting OutFile to fd 1 ...
	I0609 00:57:20.161220   10083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 00:57:20.161224   10083 out.go:304] Setting ErrFile to fd 2...
	I0609 00:57:20.161226   10083 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 00:57:20.161326   10083 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	W0609 00:57:20.161426   10083 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/config/config.json: no such file or directory
	I0609 00:57:20.161519   10083 out.go:298] Setting JSON to true
	I0609 00:57:20.195513   10083 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":2403,"bootTime":1623197837,"procs":132,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0609 00:57:20.195625   10083 start.go:121] virtualization: kvm guest
	I0609 00:57:20.198241   10083 notify.go:169] Checking for updates...
	W0609 00:57:20.200518   10083 start.go:660] api.Load failed for download-only-20210609005708-9941: filestore "download-only-20210609005708-9941": Docker machine "download-only-20210609005708-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0609 00:57:20.200557   10083 driver.go:335] Setting default libvirt URI to qemu:///system
	W0609 00:57:20.200589   10083 start.go:660] api.Load failed for download-only-20210609005708-9941: filestore "download-only-20210609005708-9941": Docker machine "download-only-20210609005708-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0609 00:57:20.243966   10083 docker.go:132] docker version: linux-19.03.15
	I0609 00:57:20.244056   10083 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 00:57:20.316735   10083 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:132 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:34 SystemTime:2021-06-09 00:57:20.275913435 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 00:57:20.316825   10083 docker.go:244] overlay module found
	I0609 00:57:20.318906   10083 start.go:279] selected driver: docker
	I0609 00:57:20.318919   10083 start.go:752] validating driver "docker" against &{Name:download-only-20210609005708-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210609005708-9941 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 00:57:20.319389   10083 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 00:57:20.398467   10083 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:132 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:34 SystemTime:2021-06-09 00:57:20.351689644 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 00:57:20.399016   10083 cni.go:93] Creating CNI manager for ""
	I0609 00:57:20.399034   10083 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0609 00:57:20.399041   10083 start_flags.go:273] config:
	{Name:download-only-20210609005708-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:download-only-20210609005708-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker C
RISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 00:57:20.401053   10083 cache.go:115] Beginning downloading kic base image for docker with docker
	I0609 00:57:20.402533   10083 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 00:57:20.402637   10083 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
	I0609 00:57:20.402835   10083 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
	I0609 00:57:20.402854   10083 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
	I0609 00:57:20.402858   10083 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
	I0609 00:57:20.402872   10083 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
	I0609 00:57:20.460771   10083 preload.go:145] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
	I0609 00:57:20.460790   10083 cache.go:54] Caching tarball of preloaded images
	I0609 00:57:20.461021   10083 preload.go:110] Checking if preload exists for k8s version v1.20.7 and runtime docker
	I0609 00:57:20.463183   10083 preload.go:230] getting checksum for preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4 ...
	I0609 00:57:20.516218   10083 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4?checksum=md5:f41702d59ddd4fa1749fa672343212b9 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.20.7-docker-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210609005708-9941"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.7/LogsDuration (0.07s)
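
Each cli_runner.go:115 probe above shells out to `docker system info --format "{{json .}}"` and decodes the JSON into the struct dumped at info.go:261; fields such as NCPU and MemTotal feed lines like "Using suggested 8000MB memory alloc based on sys=32179MB" (33742209024 bytes is 32179 MiB). A trimmed sketch of that probe, keeping only a few fields where minikube's real struct carries everything in the dumps above:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// dockerInfo keeps only the fields this sketch prints; the full payload
// is the struct shown in the info.go:261 dumps above.
type dockerInfo struct {
	Driver          string `json:"Driver"`
	NCPU            int    `json:"NCPU"`
	MemTotal        int64  `json:"MemTotal"`
	ServerVersion   string `json:"ServerVersion"`
	OperatingSystem string `json:"OperatingSystem"`
}

func main() {
	out, err := exec.Command("docker", "system", "info", "--format", "{{json .}}").Output()
	if err != nil {
		panic(err)
	}
	var info dockerInfo
	if err := json.Unmarshal(out, &info); err != nil {
		panic(err)
	}
	// 33742209024 / 1024 / 1024 = 32179 MiB, matching "sys=32179MB" above.
	fmt.Printf("driver=%s cpus=%d mem=%dMiB docker=%s (%s)\n",
		info.Driver, info.NCPU, info.MemTotal/1024/1024,
		info.ServerVersion, info.OperatingSystem)
}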

TestDownloadOnly/v1.22.0-alpha.2/json-events (5.64s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210609005708-9941 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.2 --container-runtime=docker --driver=docker  --container-runtime=docker
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210609005708-9941 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.2 --container-runtime=docker --driver=docker  --container-runtime=docker: (5.63860016s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.2/json-events (5.64s)

TestDownloadOnly/v1.22.0-alpha.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.2/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-alpha.2/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-alpha.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.2/LogsDuration
aaa_download_only_test.go:171: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210609005708-9941
aaa_download_only_test.go:171: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210609005708-9941: exit status 85 (70.838282ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/06/09 00:57:27
	Running on machine: debian-jenkins-agent-1
	Binary: Built with gc go1.16.4 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0609 00:57:27.411491   10214 out.go:291] Setting OutFile to fd 1 ...
	I0609 00:57:27.411632   10214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 00:57:27.411640   10214 out.go:304] Setting ErrFile to fd 2...
	I0609 00:57:27.411643   10214 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 00:57:27.411751   10214 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	W0609 00:57:27.411847   10214 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/config/config.json: no such file or directory
	I0609 00:57:27.411934   10214 out.go:298] Setting JSON to true
	I0609 00:57:27.448799   10214 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":2410,"bootTime":1623197837,"procs":132,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0609 00:57:27.448875   10214 start.go:121] virtualization: kvm guest
	I0609 00:57:27.451363   10214 notify.go:169] Checking for updates...
	W0609 00:57:27.453530   10214 start.go:660] api.Load failed for download-only-20210609005708-9941: filestore "download-only-20210609005708-9941": Docker machine "download-only-20210609005708-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0609 00:57:27.453568   10214 driver.go:335] Setting default libvirt URI to qemu:///system
	W0609 00:57:27.453605   10214 start.go:660] api.Load failed for download-only-20210609005708-9941: filestore "download-only-20210609005708-9941": Docker machine "download-only-20210609005708-9941" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0609 00:57:27.501658   10214 docker.go:132] docker version: linux-19.03.15
	I0609 00:57:27.501732   10214 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 00:57:27.579762   10214 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:132 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:34 SystemTime:2021-06-09 00:57:27.533347573 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 00:57:27.579844   10214 docker.go:244] overlay module found
	I0609 00:57:27.582035   10214 start.go:279] selected driver: docker
	I0609 00:57:27.582053   10214 start.go:752] validating driver "docker" against &{Name:download-only-20210609005708-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:download-only-20210609005708-9941 Namespace:default APIServerName:minikubeCA API
ServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 00:57:27.582623   10214 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 00:57:27.659877   10214 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:132 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:23 OomKillDisable:true NGoroutines:34 SystemTime:2021-06-09 00:57:27.615246703 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnin
gs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 00:57:27.660440   10214 cni.go:93] Creating CNI manager for ""
	I0609 00:57:27.660457   10214 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
	I0609 00:57:27.660464   10214 start_flags.go:273] config:
	{Name:download-only-20210609005708-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-alpha.2 ClusterName:download-only-20210609005708-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:
docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 00:57:27.662713   10214 cache.go:115] Beginning downloading kic base image for docker with docker
	I0609 00:57:27.664304   10214 preload.go:110] Checking if preload exists for k8s version v1.22.0-alpha.2 and runtime docker
	I0609 00:57:27.664331   10214 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 to local cache
	I0609 00:57:27.664497   10214 image.go:58] Checking for gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory
	I0609 00:57:27.664519   10214 image.go:61] Found gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 in local cache directory, skipping pull
	I0609 00:57:27.664524   10214 image.go:102] gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 exists in cache, skipping pull
	I0609 00:57:27.664548   10214 cache.go:137] successfully saved gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 as a tarball
	I0609 00:57:27.723648   10214 preload.go:145] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-alpha.2-docker-overlay2-amd64.tar.lz4
	I0609 00:57:27.723669   10214 cache.go:54] Caching tarball of preloaded images
	I0609 00:57:27.723845   10214 preload.go:110] Checking if preload exists for k8s version v1.22.0-alpha.2 and runtime docker
	I0609 00:57:27.725838   10214 preload.go:230] getting checksum for preloaded-images-k8s-v11-v1.22.0-alpha.2-docker-overlay2-amd64.tar.lz4 ...
	I0609 00:57:27.784372   10214 download.go:86] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v11-v1.22.0-alpha.2-docker-overlay2-amd64.tar.lz4?checksum=md5:4c5e54ea81f8273e4f880316c83fce52 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-alpha.2-docker-overlay2-amd64.tar.lz4
	I0609 00:57:30.964747   10214 preload.go:240] saving checksum for preloaded-images-k8s-v11-v1.22.0-alpha.2-docker-overlay2-amd64.tar.lz4 ...
	I0609 00:57:30.964845   10214 preload.go:247] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v11-v1.22.0-alpha.2-docker-overlay2-amd64.tar.lz4 ...
	I0609 00:57:32.038154   10214 cache.go:57] Finished verifying existence of preloaded tar for v1.22.0-alpha.2 on docker
	I0609 00:57:32.038272   10214 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/download-only-20210609005708-9941/config.json ...
	I0609 00:57:32.038448   10214 preload.go:110] Checking if preload exists for k8s version v1.22.0-alpha.2 and runtime docker
	I0609 00:57:32.038644   10214 download.go:86] Downloading: https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.2/bin/linux/amd64/kubectl?checksum=file:https://storage.googleapis.com/kubernetes-release/release/v1.22.0-alpha.2/bin/linux/amd64/kubectl.sha256 -> /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/cache/linux/v1.22.0-alpha.2/kubectl
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210609005708-9941"

-- /stdout --
aaa_download_only_test.go:172: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-alpha.2/LogsDuration (0.07s)
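
Every profile save above (profile.go:148) funnels through lock.go:36, which guards config.json with a retrying write lock ({Delay:500ms Timeout:1m0s} in the trace). A rough sketch of that acquire/retry/timeout shape using an O_EXCL lockfile; minikube delegates this to a lock package rather than hand-rolling a ".lock" file, so this is illustrative only:

package main

import (
	"errors"
	"fmt"
	"os"
	"time"
)

// writeFileLocked retries an exclusive lockfile for up to timeout,
// polling every delay, then writes data and releases the lock.
func writeFileLocked(path string, data []byte, delay, timeout time.Duration) error {
	lock := path + ".lock"
	deadline := time.Now().Add(timeout)
	for {
		// O_EXCL makes creation fail if another writer holds the lock.
		f, err := os.OpenFile(lock, os.O_CREATE|os.O_EXCL|os.O_WRONLY, 0o600)
		if err == nil {
			f.Close()
			break
		}
		if !errors.Is(err, os.ErrExist) {
			return err
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out acquiring %s", lock)
		}
		time.Sleep(delay)
	}
	defer os.Remove(lock)
	return os.WriteFile(path, data, 0o644)
}

func main() {
	err := writeFileLocked("/tmp/config.json", []byte(`{"Name":"demo"}`),
		500*time.Millisecond, time.Minute)
	fmt.Println(err)
}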

TestDownloadOnly/DeleteAll (0.37s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:189: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.37s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:201: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210609005708-9941
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.23s)

TestDownloadOnlyKic (3.36s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:226: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210609005733-9941 --force --alsologtostderr --driver=docker  --container-runtime=docker
aaa_download_only_test.go:226: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210609005733-9941 --force --alsologtostderr --driver=docker  --container-runtime=docker: (2.048048041s)
helpers_test.go:171: Cleaning up "download-docker-20210609005733-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210609005733-9941
--- PASS: TestDownloadOnlyKic (3.36s)

TestOffline (153.01s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-docker-20210609012512-9941 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-docker-20210609012512-9941 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=docker: (2m30.277557821s)
helpers_test.go:171: Cleaning up "offline-docker-20210609012512-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-docker-20210609012512-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p offline-docker-20210609012512-9941: (2.733289121s)
--- PASS: TestOffline (153.01s)

TestAddons/parallel/Registry (14.59s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: registry stabilized in 13.598685ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:335: "registry-h26tt" [71354605-8cf3-46c3-9b35-96a35c3e221d] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.013428964s

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:335: "registry-proxy-4fcb6" [078fab87-b1cd-495b-9498-28f1496c0019] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.006616893s
addons_test.go:307: (dbg) Run:  kubectl --context addons-20210609005737-9941 delete po -l run=registry-test --now

=== CONT  TestAddons/parallel/Registry
addons_test.go:312: (dbg) Run:  kubectl --context addons-20210609005737-9941 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:312: (dbg) Done: kubectl --context addons-20210609005737-9941 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.97261403s)
addons_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 ip
2021/06/09 01:00:57 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.59s)
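
The registry check above runs a busybox pod doing `wget --spider -S` against the in-cluster service DNS name, then fetches http://192.168.49.2:5000 from the host. A Go approximation of the spider step (fetch headers only via HEAD, accept any 2xx); note the service name only resolves from inside the cluster, so that form has to run in a pod:

package main

import (
	"fmt"
	"net/http"
	"time"
)

// spider issues a HEAD request, roughly what `wget --spider` does:
// retrieve headers only and succeed on a 2xx status.
func spider(url string) error {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Head(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode < 200 || resp.StatusCode >= 300 {
		return fmt.Errorf("%s: unexpected status %s", url, resp.Status)
	}
	return nil
}

func main() {
	// The service DNS name resolves only inside the cluster; the
	// node IP:5000 form is what the test hits from the host.
	for _, u := range []string{
		"http://registry.kube-system.svc.cluster.local",
		"http://192.168.49.2:5000",
	} {
		fmt.Println(u, "->", spider(u))
	}
}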

TestAddons/parallel/Ingress (44.81s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:335: "ingress-nginx-admission-create-tls5x" [193f1284-7440-4aef-a9b6-f60684bef13f] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 58.650287ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210609005737-9941 replace --force -f testdata/nginx-ingv1beta.yaml
addons_test.go:170: kubectl --context addons-20210609005737-9941 replace --force -f testdata/nginx-ingv1beta.yaml: unexpected stderr: Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210609005737-9941 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:335: "nginx" [7da4622c-9e29-48bc-99ae-5929a6a6f4e9] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:335: "nginx" [7da4622c-9e29-48bc-99ae-5929a6a6f4e9] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.010052137s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:230: (dbg) Run:  kubectl --context addons-20210609005737-9941 replace --force -f testdata/nginx-ingv1.yaml
addons_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable ingress --alsologtostderr -v=1: (29.110176169s)
--- PASS: TestAddons/parallel/Ingress (44.81s)
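
Both curl probes above hit the ingress controller on 127.0.0.1 while presenting "Host: nginx.example.com", the virtual host the Ingress rule routes on. The same trick in Go is setting req.Host, since the client writes the Host header from that field; a minimal sketch:

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Equivalent of `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'`:
	// connect to the node's ingress on 127.0.0.1 but present the virtual
	// host the Ingress rule matches on.
	req, err := http.NewRequest("GET", "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	req.Host = "nginx.example.com" // Go sets the Host header from req.Host
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}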

TestAddons/parallel/MetricsServer (5.59s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: metrics-server stabilized in 14.108607ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:335: "metrics-server-7894db45f8-wfr6c" [6b9a98c3-9da0-47f6-95ae-026323989738] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014298866s
addons_test.go:382: (dbg) Run:  kubectl --context addons-20210609005737-9941 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.59s)

TestAddons/parallel/HelmTiller (12.51s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: tiller-deploy stabilized in 1.461134ms
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:335: "tiller-deploy-7c86b7fbdf-67s8r" [9422a238-8296-485c-8141-48e1aa7100b0] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.007077179s
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210609005737-9941 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Done: kubectl --context addons-20210609005737-9941 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (6.76301501s)
addons_test.go:445: kubectl --context addons-20210609005737-9941 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:457: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (12.51s)

TestAddons/parallel/Olm (47.88s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:476: catalog-operator stabilized in 14.275576ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:480: olm-operator stabilized in 17.124773ms

=== CONT  TestAddons/parallel/Olm
addons_test.go:484: packageserver stabilized in 20.289769ms
addons_test.go:486: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=catalog-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "catalog-operator-7544db6ccd-dkf8s" [35076821-a47b-4c37-989a-59092a16d1fb] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:486: (dbg) TestAddons/parallel/Olm: app=catalog-operator healthy within 5.007626428s

=== CONT  TestAddons/parallel/Olm
addons_test.go:489: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=olm-operator" in namespace "olm" ...

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "olm-operator-79b67c565d-ltj4b" [fdc9e76c-b622-40ae-a990-cb2a23c1b153] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:489: (dbg) TestAddons/parallel/Olm: app=olm-operator healthy within 5.005548469s

=== CONT  TestAddons/parallel/Olm
addons_test.go:492: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "app=packageserver" in namespace "olm" ...
helpers_test.go:335: "packageserver-6976c9588-2vfqk" [c5c5ce3b-6695-4c3e-b673-6d6317c333b7] Running
helpers_test.go:335: "packageserver-6976c9588-vn2rv" [ea0b81e5-d827-4310-b9fa-3185174628eb] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "packageserver-6976c9588-2vfqk" [c5c5ce3b-6695-4c3e-b673-6d6317c333b7] Running
helpers_test.go:335: "packageserver-6976c9588-vn2rv" [ea0b81e5-d827-4310-b9fa-3185174628eb] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "packageserver-6976c9588-2vfqk" [c5c5ce3b-6695-4c3e-b673-6d6317c333b7] Running
helpers_test.go:335: "packageserver-6976c9588-vn2rv" [ea0b81e5-d827-4310-b9fa-3185174628eb] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "packageserver-6976c9588-2vfqk" [c5c5ce3b-6695-4c3e-b673-6d6317c333b7] Running
helpers_test.go:335: "packageserver-6976c9588-vn2rv" [ea0b81e5-d827-4310-b9fa-3185174628eb] Running
helpers_test.go:335: "packageserver-6976c9588-2vfqk" [c5c5ce3b-6695-4c3e-b673-6d6317c333b7] Running
helpers_test.go:335: "packageserver-6976c9588-vn2rv" [ea0b81e5-d827-4310-b9fa-3185174628eb] Running

=== CONT  TestAddons/parallel/Olm
helpers_test.go:335: "packageserver-6976c9588-2vfqk" [c5c5ce3b-6695-4c3e-b673-6d6317c333b7] Running
addons_test.go:492: (dbg) TestAddons/parallel/Olm: app=packageserver healthy within 5.010266917s
addons_test.go:495: (dbg) TestAddons/parallel/Olm: waiting 6m0s for pods matching "olm.catalogSource=operatorhubio-catalog" in namespace "olm" ...
helpers_test.go:335: "operatorhubio-catalog-tdq5w" [afd9dd22-b985-4e63-bb32-675bbec12f87] Running

=== CONT  TestAddons/parallel/Olm
addons_test.go:495: (dbg) TestAddons/parallel/Olm: olm.catalogSource=operatorhubio-catalog healthy within 5.016957403s
addons_test.go:500: (dbg) Run:  kubectl --context addons-20210609005737-9941 create -f testdata/etcd.yaml
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210609005737-9941 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210609005737-9941 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210609005737-9941 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210609005737-9941 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210609005737-9941 get csv -n my-etcd
addons_test.go:512: kubectl --context addons-20210609005737-9941 get csv -n my-etcd: unexpected stderr: No resources found in my-etcd namespace.

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210609005737-9941 get csv -n my-etcd

=== CONT  TestAddons/parallel/Olm
addons_test.go:507: (dbg) Run:  kubectl --context addons-20210609005737-9941 get csv -n my-etcd
--- PASS: TestAddons/parallel/Olm (47.88s)
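
For reference, the retry loop recorded above (addons_test.go:507 re-running kubectl until the etcd ClusterServiceVersion exists) can be reproduced with a minimal Go sketch like the one below. This is not the test's actual code: the context name, namespace, and the "No resources found" stderr text are taken from the log, while the retry count and 5s interval are assumptions.

// Hedged sketch: poll for an OLM ClusterServiceVersion the way the
// repeated "get csv -n my-etcd" runs above suggest.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func main() {
	for i := 0; i < 10; i++ {
		// While the CSV is still being created, kubectl prints
		// "No resources found in my-etcd namespace." (seen in the log).
		out, _ := exec.Command("kubectl", "--context", "addons-20210609005737-9941",
			"get", "csv", "-n", "my-etcd").CombinedOutput()
		if len(out) > 0 && !strings.Contains(string(out), "No resources found") {
			fmt.Printf("CSV found:\n%s", out)
			return
		}
		time.Sleep(5 * time.Second) // interval is a guess; the real test's backoff may differ
	}
	fmt.Println("CSV never appeared")
}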

TestAddons/parallel/CSI (77.31s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:539: csi-hostpath-driver pods stabilized in 4.835374ms
addons_test.go:542: (dbg) Run:  kubectl --context addons-20210609005737-9941 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:547: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210609005737-9941 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:552: (dbg) Run:  kubectl --context addons-20210609005737-9941 create -f testdata/csi-hostpath-driver/pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:557: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:335: "task-pv-pod" [a7bd2b78-da75-471e-acb4-eedbe20103e0] Pending
helpers_test.go:335: "task-pv-pod" [a7bd2b78-da75-471e-acb4-eedbe20103e0] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod" [a7bd2b78-da75-471e-acb4-eedbe20103e0] Running

=== CONT  TestAddons/parallel/CSI
addons_test.go:557: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 25.058804192s
addons_test.go:562: (dbg) Run:  kubectl --context addons-20210609005737-9941 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:567: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210609005737-9941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:418: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210609005737-9941 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:572: (dbg) Run:  kubectl --context addons-20210609005737-9941 delete pod task-pv-pod

=== CONT  TestAddons/parallel/CSI
addons_test.go:572: (dbg) Done: kubectl --context addons-20210609005737-9941 delete pod task-pv-pod: (12.539745536s)
addons_test.go:578: (dbg) Run:  kubectl --context addons-20210609005737-9941 delete pvc hpvc
addons_test.go:584: (dbg) Run:  kubectl --context addons-20210609005737-9941 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210609005737-9941 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-20210609005737-9941 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:599: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:335: "task-pv-pod-restore" [cc8cbd74-73c1-4112-a09e-e7d49deb1454] Pending
helpers_test.go:335: "task-pv-pod-restore" [cc8cbd74-73c1-4112-a09e-e7d49deb1454] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod-restore" [cc8cbd74-73c1-4112-a09e-e7d49deb1454] Running
addons_test.go:599: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 17.005259585s
addons_test.go:604: (dbg) Run:  kubectl --context addons-20210609005737-9941 delete pod task-pv-pod-restore
addons_test.go:604: (dbg) Done: kubectl --context addons-20210609005737-9941 delete pod task-pv-pod-restore: (12.486427848s)
addons_test.go:608: (dbg) Run:  kubectl --context addons-20210609005737-9941 delete pvc hpvc-restore
addons_test.go:612: (dbg) Run:  kubectl --context addons-20210609005737-9941 delete volumesnapshot new-snapshot-demo
addons_test.go:616: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:616: (dbg) Done: out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.743448641s)
addons_test.go:620: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (77.31s)
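
The PVC waits above (helpers_test.go:385 polling `kubectl get pvc hpvc -o jsonpath={.status.phase}`) amount to waiting for the claim to reach phase Bound. A minimal Go sketch of that check follows; profile, claim name, and the 6m0s window are from the log, and the 2s poll interval plus the simplified error handling are assumptions, not the test's code.

// Hedged sketch: wait for a PVC to bind, mirroring the jsonpath poll above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func pvcPhase(ctx, name string) string {
	out, err := exec.Command("kubectl", "--context", ctx, "get", "pvc", name,
		"-o", "jsonpath={.status.phase}", "-n", "default").Output()
	if err != nil {
		return "" // claim may not exist yet; treat as "not Bound"
	}
	return strings.TrimSpace(string(out))
}

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the log waits 6m0s
	for time.Now().Before(deadline) {
		if pvcPhase("addons-20210609005737-9941", "hpvc") == "Bound" {
			fmt.Println("pvc is Bound")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pvc")
}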

TestAddons/parallel/GCPAuth (34.9s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:631: (dbg) Run:  kubectl --context addons-20210609005737-9941 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:637: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [3a488e3f-b831-40ab-ae11-743d12ddf04b] Pending
helpers_test.go:335: "busybox" [3a488e3f-b831-40ab-ae11-743d12ddf04b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [3a488e3f-b831-40ab-ae11-743d12ddf04b] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:637: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 10.005765316s
addons_test.go:643: (dbg) Run:  kubectl --context addons-20210609005737-9941 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:680: (dbg) Run:  kubectl --context addons-20210609005737-9941 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:696: (dbg) Run:  kubectl --context addons-20210609005737-9941 apply -f testdata/private-image.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...
helpers_test.go:335: "private-image-7ff9c8c74f-qdpwm" [a71258b0-bfb0-4b12-9d33-b5371a21c640] Pending

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:335: "private-image-7ff9c8c74f-qdpwm" [a71258b0-bfb0-4b12-9d33-b5371a21c640] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:335: "private-image-7ff9c8c74f-qdpwm" [a71258b0-bfb0-4b12-9d33-b5371a21c640] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:703: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 17.010020948s
addons_test.go:709: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:709: (dbg) Done: out/minikube-linux-amd64 -p addons-20210609005737-9941 addons disable gcp-auth --alsologtostderr -v=1: (6.815318894s)
--- PASS: TestAddons/parallel/GCPAuth (34.90s)
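
The gcp-auth assertions above (addons_test.go:643 and :680) boil down to checking that the addon injected GOOGLE_APPLICATION_CREDENTIALS and GOOGLE_CLOUD_PROJECT into the busybox pod. A hedged sketch of that presence check is below; the pod and context names are from the log, and since the expected values are not shown, the sketch only verifies the variables exist.

// Hedged sketch: confirm the gcp-auth addon injected its env vars.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	for _, v := range []string{"GOOGLE_APPLICATION_CREDENTIALS", "GOOGLE_CLOUD_PROJECT"} {
		// Same printenv invocation the test runs inside the pod.
		out, err := exec.Command("kubectl", "--context", "addons-20210609005737-9941",
			"exec", "busybox", "--", "/bin/sh", "-c", "printenv "+v).CombinedOutput()
		if err != nil {
			fmt.Printf("%s not injected: %v\n", v, err)
			continue
		}
		fmt.Printf("%s=%s", v, out)
	}
}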

TestCertOptions (37.93s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210609012856-9941 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210609012856-9941 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=docker: (34.989232498s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210609012856-9941 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210609012856-9941 config view
helpers_test.go:171: Cleaning up "cert-options-20210609012856-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210609012856-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210609012856-9941: (2.570652929s)
--- PASS: TestCertOptions (37.93s)
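
What cert_options_test.go:58 verifies is that the apiserver certificate minted with --apiserver-ips/--apiserver-names/--apiserver-port carries the requested SANs. The openssl invocation below is copied from the log; the substring checks are an assumed simplification of the real assertions, not the test's code.

// Hedged sketch: dump the apiserver cert via `minikube ssh` and look for
// the custom SANs passed on the start command line above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"cert-options-20210609012856-9941", "ssh",
		"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	for _, want := range []string{"192.168.15.15", "www.google.com"} {
		fmt.Printf("%s present: %v\n", want, strings.Contains(string(out), want))
	}
}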

TestDockerFlags (46.99s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Run:  out/minikube-linux-amd64 start -p docker-flags-20210609012816-9941 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestDockerFlags
docker_test.go:45: (dbg) Done: out/minikube-linux-amd64 start -p docker-flags-20210609012816-9941 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (41.591674226s)
docker_test.go:50: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20210609012816-9941 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:61: (dbg) Run:  out/minikube-linux-amd64 -p docker-flags-20210609012816-9941 ssh "sudo systemctl show docker --property=ExecStart --no-pager"

=== CONT  TestDockerFlags
helpers_test.go:171: Cleaning up "docker-flags-20210609012816-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-flags-20210609012816-9941

=== CONT  TestDockerFlags
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-flags-20210609012816-9941: (4.590982993s)
--- PASS: TestDockerFlags (46.99s)
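
The docker_test.go:50 check above confirms that --docker-env values passed to `minikube start` land in the docker systemd unit's Environment property. A minimal sketch of that check follows; the flag values and systemctl command are from the log, while the substring matching is a simplification of whatever parsing the real test does.

// Hedged sketch: assert the --docker-env pairs appear in the docker unit.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"docker-flags-20210609012816-9941", "ssh",
		"sudo systemctl show docker --property=Environment --no-pager").CombinedOutput()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	for _, want := range []string{"FOO=BAR", "BAZ=BAT"} {
		if !strings.Contains(string(out), want) {
			fmt.Printf("missing docker-env %q in: %s", want, out)
		}
	}
}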

TestForceSystemdFlag (43.54s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210609012818-9941 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210609012818-9941 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (40.181258169s)
docker_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210609012818-9941 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdFlag
helpers_test.go:171: Cleaning up "force-systemd-flag-20210609012818-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210609012818-9941

=== CONT  TestForceSystemdFlag
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210609012818-9941: (2.925741776s)
--- PASS: TestForceSystemdFlag (43.54s)
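
The docker_test.go:102 step above is the core assertion for both force-systemd tests: with systemd forced, docker inside the node should report the systemd cgroup driver. A hedged sketch, assuming the profile name from the log and simplified failure handling:

// Hedged sketch: query docker's cgroup driver inside the node and expect systemd.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p",
		"force-systemd-flag-20210609012818-9941", "ssh",
		"docker info --format {{.CgroupDriver}}").Output()
	if err != nil {
		fmt.Println("ssh failed:", err)
		return
	}
	if driver := strings.TrimSpace(string(out)); driver != "systemd" {
		fmt.Printf("unexpected cgroup driver: %q\n", driver)
	} else {
		fmt.Println("cgroup driver is systemd")
	}
}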

TestForceSystemdEnv (32.95s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210609012745-9941 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210609012745-9941 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=docker: (25.354908479s)
docker_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210609012745-9941 ssh "docker info --format {{.CgroupDriver}}"

=== CONT  TestForceSystemdEnv
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210609012745-9941: (6.556498933s)
--- PASS: TestForceSystemdEnv (32.95s)

TestErrorSpam/start (59.06s)

=== RUN   TestErrorSpam/start
error_spam_test.go:210: Cleaning up 1 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run: (4.371309576s)
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 start --dry-run
--- PASS: TestErrorSpam/start (59.06s)

TestErrorSpam/status (31s)

=== RUN   TestErrorSpam/status
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 status
--- PASS: TestErrorSpam/status (31.00s)

TestErrorSpam/pause (1.89s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 pause
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 pause
--- PASS: TestErrorSpam/pause (1.89s)

TestErrorSpam/unpause (0.52s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 unpause
--- PASS: TestErrorSpam/unpause (0.52s)

TestErrorSpam/stop (11.01s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:210: Cleaning up 0 logfile(s) ...
error_spam_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 stop
error_spam_test.go:168: (dbg) Done: out/minikube-linux-amd64 -p nospam-20210609010229-9941 --log_dir /tmp/nospam-20210609010229-9941 stop: (11.013803489s)
--- PASS: TestErrorSpam/stop (11.01s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1564: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/files/etc/test/nested/copy/9941/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (121.89s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:542: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210609010438-9941 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker
E0609 01:05:43.573854    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:43.579406    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:43.589639    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:43.609884    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:43.650131    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:43.730395    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:43.890760    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:44.211215    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:44.852094    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:46.132774    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:48.693877    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:05:53.814459    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:06:04.054966    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:06:24.535328    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
functional_test.go:542: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210609010438-9941 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=docker: (2m1.894270533s)
--- PASS: TestFunctional/serial/StartWithProxy (121.89s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (4.87s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:586: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210609010438-9941 --alsologtostderr -v=8
functional_test.go:586: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210609010438-9941 --alsologtostderr -v=8: (4.865201038s)
functional_test.go:590: soft start took 4.865774646s for "functional-20210609010438-9941" cluster.
--- PASS: TestFunctional/serial/SoftStart (4.87s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:606: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.23s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:619: (dbg) Run:  kubectl --context functional-20210609010438-9941 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 cache add k8s.gcr.io/pause:3.1
functional_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 cache add k8s.gcr.io/pause:3.3
functional_test.go:911: (dbg) Done: out/minikube-linux-amd64 -p functional-20210609010438-9941 cache add k8s.gcr.io/pause:3.3: (1.387734605s)
functional_test.go:911: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 cache add k8s.gcr.io/pause:latest
functional_test.go:911: (dbg) Done: out/minikube-linux-amd64 -p functional-20210609010438-9941 cache add k8s.gcr.io/pause:latest: (1.27259428s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.31s)

TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:941: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210609010438-9941 /tmp/functional-20210609010438-9941045747310
functional_test.go:946: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 cache add minikube-local-cache-test:functional-20210609010438-9941
functional_test.go:946: (dbg) Done: out/minikube-linux-amd64 -p functional-20210609010438-9941 cache add minikube-local-cache-test:functional-20210609010438-9941: (1.465920966s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.65s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:953: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:960: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:973: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.29s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:995: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh sudo docker rmi k8s.gcr.io/pause:latest
functional_test.go:1001: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1001: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (284.994609ms)

-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 

-- /stdout --
** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1006: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 cache reload
functional_test.go:1011: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.87s)
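The cache_reload sequence above can be replayed by hand. A minimal sketch, assuming a minikube binary on PATH in place of the CI's out/minikube-linux-amd64 and the same profile name:

	# Remove the cached image inside the node, confirm it is gone, reload the cache, confirm it is back.
	minikube -p functional-20210609010438-9941 ssh sudo docker rmi k8s.gcr.io/pause:latest
	minikube -p functional-20210609010438-9941 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exits 1 while the image is absent
	minikube -p functional-20210609010438-9941 cache reload
	minikube -p functional-20210609010438-9941 ssh sudo crictl inspecti k8s.gcr.io/pause:latest   # exits 0 once the image is restored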

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:637: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 kubectl -- --context functional-20210609010438-9941 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:656: (dbg) Run:  out/kubectl --context functional-20210609010438-9941 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (99.82s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:670: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210609010438-9941 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0609 01:07:05.496322    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:08:27.417190    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
functional_test.go:670: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210609010438-9941 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m39.815051838s)
functional_test.go:674: restart took 1m39.815234359s for "functional-20210609010438-9941" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (99.82s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:721: (dbg) Run:  kubectl --context functional-20210609010438-9941 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:735: etcd phase: Running
functional_test.go:745: etcd status: Ready
functional_test.go:735: kube-apiserver phase: Running
functional_test.go:745: kube-apiserver status: Ready
functional_test.go:735: kube-controller-manager phase: Running
functional_test.go:745: kube-controller-manager status: Ready
functional_test.go:735: kube-scheduler phase: Running
functional_test.go:745: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/parallel/ConfigCmd (0.47s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1046: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210609010438-9941 config get cpus: exit status 14 (83.68657ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 config set cpus 2
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 config get cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 config unset cpus

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1046: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 config get cpus
functional_test.go:1046: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210609010438-9941 config get cpus: exit status 14 (73.308295ms)

** stderr **
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.47s)

TestFunctional/parallel/DashboardCmd (3.52s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:812: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20210609010438-9941 --alsologtostderr -v=1]
2021/06/09 01:08:49 [DEBUG] GET http://127.0.0.1:38843/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:817: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20210609010438-9941 --alsologtostderr -v=1] ...
helpers_test.go:499: unable to kill pid 78364: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (3.52s)

TestFunctional/parallel/DryRun (0.62s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:874: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210609010438-9941 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker
functional_test.go:874: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210609010438-9941 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=docker: exit status 23 (264.923574ms)

-- stdout --
	* [functional-20210609010438-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
	  - MINIKUBE_LOCATION=11610
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0609 01:08:45.613089   77969 out.go:291] Setting OutFile to fd 1 ...
	I0609 01:08:45.613458   77969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:08:45.613471   77969 out.go:304] Setting ErrFile to fd 2...
	I0609 01:08:45.613476   77969 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:08:45.613721   77969 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	I0609 01:08:45.614093   77969 out.go:298] Setting JSON to false
	I0609 01:08:45.655254   77969 start.go:111] hostinfo: {"hostname":"debian-jenkins-agent-1","uptime":3089,"bootTime":1623197837,"procs":215,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0609 01:08:45.655390   77969 start.go:121] virtualization: kvm guest
	I0609 01:08:45.659683   77969 out.go:170] * [functional-20210609010438-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
	I0609 01:08:45.661626   77969 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	I0609 01:08:45.663354   77969 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0609 01:08:45.665175   77969 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
	I0609 01:08:45.666899   77969 out.go:170]   - MINIKUBE_LOCATION=11610
	I0609 01:08:45.667989   77969 driver.go:335] Setting default libvirt URI to qemu:///system
	I0609 01:08:45.716558   77969 docker.go:132] docker version: linux-19.03.15
	I0609 01:08:45.716641   77969 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0609 01:08:45.806219   77969 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:133 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:29 OomKillDisable:true NGoroutines:40 SystemTime:2021-06-09 01:08:45.756336574 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-1 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0609 01:08:45.806295   77969 docker.go:244] overlay module found
	I0609 01:08:45.809084   77969 out.go:170] * Using the docker driver based on existing profile
	I0609 01:08:45.809108   77969 start.go:279] selected driver: docker
	I0609 01:08:45.809113   77969 start.go:752] validating driver "docker" against &{Name:functional-20210609010438-9941 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.23@sha256:baf6d94b2050bcbecd98994e265cf965a4f4768978620ccf5227a6dcb75ade45 Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.7 ClusterName:functional-20210609010438-9941 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.20.7 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0609 01:08:45.809215   77969 start.go:763] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0609 01:08:45.809250   77969 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0609 01:08:45.809272   77969 out.go:235] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	I0609 01:08:45.810792   77969 out.go:170]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0609 01:08:45.812871   77969 out.go:170] 
	W0609 01:08:45.812963   77969 out.go:235] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0609 01:08:45.814586   77969 out.go:170] 

** /stderr **
functional_test.go:889: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210609010438-9941 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
--- PASS: TestFunctional/parallel/DryRun (0.62s)
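Both dry-run invocations above are reproducible under the same assumptions (minikube on PATH in place of out/minikube-linux-amd64, same profile); the undersized request should fail fast with exit status 23, as it did in this run:

	# 250MB is below the usable minimum of 1800MB, so validation fails before touching the cluster.
	minikube start -p functional-20210609010438-9941 --dry-run --memory 250MB --driver=docker --container-runtime=docker
	echo $?   # 23 (RSRC_INSUFFICIENT_REQ_MEMORY) in this run
	# Without the memory override, the same dry run validates cleanly.
	minikube start -p functional-20210609010438-9941 --dry-run --alsologtostderr -v=1 --driver=docker --container-runtime=docker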

TestFunctional/parallel/StatusCmd (1s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:764: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:770: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:781: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.00s)

TestFunctional/parallel/LogsCmd (2.8s)

=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd

=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1082: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 logs

=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1082: (dbg) Done: out/minikube-linux-amd64 -p functional-20210609010438-9941 logs: (2.800478664s)
--- PASS: TestFunctional/parallel/LogsCmd (2.80s)

TestFunctional/parallel/LogsFileCmd (1.67s)

=== RUN   TestFunctional/parallel/LogsFileCmd
=== PAUSE TestFunctional/parallel/LogsFileCmd

=== CONT  TestFunctional/parallel/LogsFileCmd
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 logs --file /tmp/functional-20210609010438-9941619424501/logs.txt

=== CONT  TestFunctional/parallel/LogsFileCmd
functional_test.go:1098: (dbg) Done: out/minikube-linux-amd64 -p functional-20210609010438-9941 logs --file /tmp/functional-20210609010438-9941619424501/logs.txt: (1.667658225s)
--- PASS: TestFunctional/parallel/LogsFileCmd (1.67s)

TestFunctional/parallel/MountCmd (10.79s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210609010438-9941 /tmp/mounttest027320784:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1623200923126024273" to /tmp/mounttest027320784/created-by-test
functional_test_mount_test.go:107: wrote "test-1623200923126024273" to /tmp/mounttest027320784/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1623200923126024273" to /tmp/mounttest027320784/test-1623200923126024273
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (294.79501ms)

** stderr **
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jun  9 01:08 created-by-test
-rw-r--r-- 1 docker docker 24 Jun  9 01:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jun  9 01:08 test-1623200923126024273
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh cat /mount-9p/test-1623200923126024273

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-20210609010438-9941 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:335: "busybox-mount" [e51dcd0b-b6fc-4737-9630-94de4c9a919d] Pending

=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [e51dcd0b-b6fc-4737-9630-94de4c9a919d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [e51dcd0b-b6fc-4737-9630-94de4c9a919d] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 8.005237296s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-20210609010438-9941 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210609010438-9941 /tmp/mounttest027320784:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd (10.79s)
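A condensed replay of the 9p mount flow above, assuming minikube on PATH and any writable host directory in place of the test's /tmp/mounttest027320784:

	# Start the mount in the background, then verify it from inside the node.
	minikube mount -p functional-20210609010438-9941 /tmp/mounttest027320784:/mount-9p &
	minikube -p functional-20210609010438-9941 ssh "findmnt -T /mount-9p | grep 9p"
	minikube -p functional-20210609010438-9941 ssh -- ls -la /mount-9p
	# Tear down: force-unmount inside the node, then stop the background mount process.
	minikube -p functional-20210609010438-9941 ssh "sudo umount -f /mount-9p"
	kill %1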

TestFunctional/parallel/ServiceCmd (14.29s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1274: (dbg) Run:  kubectl --context functional-20210609010438-9941 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1280: (dbg) Run:  kubectl --context functional-20210609010438-9941 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1285: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:335: "hello-node-6cbfcd7cbc-zx5bl" [774dbd92-76c3-46fa-b48d-d288f43b0a78] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:335: "hello-node-6cbfcd7cbc-zx5bl" [774dbd92-76c3-46fa-b48d-d288f43b0a78] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1285: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 12.006522628s
functional_test.go:1289: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 service list

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1302: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 service --namespace=default --https --url hello-node

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1311: found endpoint: https://192.168.49.2:32718
functional_test.go:1322: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 service hello-node --url --format={{.IP}}
functional_test.go:1331: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 service hello-node --url
functional_test.go:1337: found endpoint for hello-node: http://192.168.49.2:32718
functional_test.go:1348: Attempting to fetch http://192.168.49.2:32718 ...
functional_test.go:1367: http://192.168.49.2:32718: success! body:

Hostname: hello-node-6cbfcd7cbc-zx5bl

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=172.17.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:32718
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (14.29s)
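The service checks above reduce to the following sketch, assuming the same kubectl context and with curl standing in for the test's Go HTTP client (the NodePort URL differs per run):

	# Deploy the echoserver and expose it on a NodePort.
	kubectl --context functional-20210609010438-9941 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
	kubectl --context functional-20210609010438-9941 expose deployment hello-node --type=NodePort --port=8080
	kubectl --context functional-20210609010438-9941 wait --for=condition=available deployment/hello-node
	# Resolve the endpoint through minikube and fetch it; this run answered at http://192.168.49.2:32718.
	URL=$(minikube -p functional-20210609010438-9941 service hello-node --url)
	curl "$URL"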

TestFunctional/parallel/AddonsCmd (0.2s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1382: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 addons list
functional_test.go:1393: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.20s)

TestFunctional/parallel/PersistentVolumeClaim (31.38s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:335: "storage-provisioner" [23ce9c8d-bfb0-4a01-b1f4-c01cd387cbb7] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.007307025s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210609010438-9941 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210609010438-9941 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210609010438-9941 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210609010438-9941 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [f8ed1566-f03a-45fa-8237-c04a918fea71] Pending
helpers_test.go:335: "sp-pod" [f8ed1566-f03a-45fa-8237-c04a918fea71] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [f8ed1566-f03a-45fa-8237-c04a918fea71] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.037511166s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210609010438-9941 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210609010438-9941 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210609010438-9941 delete -f testdata/storage-provisioner/pod.yaml: (2.092744491s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210609010438-9941 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [94c0a374-c45a-42c0-b60b-ea52f9dbf6f4] Pending
helpers_test.go:335: "sp-pod" [94c0a374-c45a-42c0-b60b-ea52f9dbf6f4] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [94c0a374-c45a-42c0-b60b-ea52f9dbf6f4] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 10.006804453s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210609010438-9941 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (31.38s)
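The persistence check above follows a write, recreate, read pattern. A sketch using the test's own manifests (the testdata/ paths assume a minikube repository checkout):

	# Create the claim and a pod that mounts it, then write a marker file.
	kubectl --context functional-20210609010438-9941 apply -f testdata/storage-provisioner/pvc.yaml
	kubectl --context functional-20210609010438-9941 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-20210609010438-9941 wait --for=condition=Ready pod/sp-pod
	kubectl --context functional-20210609010438-9941 exec sp-pod -- touch /tmp/mount/foo
	# Recreate the pod and confirm the file survived on the claim.
	kubectl --context functional-20210609010438-9941 delete -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-20210609010438-9941 apply -f testdata/storage-provisioner/pod.yaml
	kubectl --context functional-20210609010438-9941 wait --for=condition=Ready pod/sp-pod
	kubectl --context functional-20210609010438-9941 exec sp-pod -- ls /tmp/mount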

TestFunctional/parallel/SSHCmd (0.62s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1415: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1432: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.62s)

TestFunctional/parallel/CpCmd (0.63s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
functional_test.go:1467: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
functional_test.go:1481: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.63s)

TestFunctional/parallel/MySQL (20.46s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1516: (dbg) Run:  kubectl --context functional-20210609010438-9941 replace --force -f testdata/mysql.yaml
functional_test.go:1521: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:335: "mysql-9bbbc5bbb-h4gxd" [2485235d-f330-4e07-8320-37da3da63960] Pending

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-h4gxd" [2485235d-f330-4e07-8320-37da3da63960] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-h4gxd" [2485235d-f330-4e07-8320-37da3da63960] Running

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1521: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 16.009365237s
functional_test.go:1528: (dbg) Run:  kubectl --context functional-20210609010438-9941 exec mysql-9bbbc5bbb-h4gxd -- mysql -ppassword -e "show databases;"
functional_test.go:1528: (dbg) Non-zero exit: kubectl --context functional-20210609010438-9941 exec mysql-9bbbc5bbb-h4gxd -- mysql -ppassword -e "show databases;": exit status 1 (158.454754ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1528: (dbg) Run:  kubectl --context functional-20210609010438-9941 exec mysql-9bbbc5bbb-h4gxd -- mysql -ppassword -e "show databases;"
functional_test.go:1528: (dbg) Non-zero exit: kubectl --context functional-20210609010438-9941 exec mysql-9bbbc5bbb-h4gxd -- mysql -ppassword -e "show databases;": exit status 1 (133.70362ms)

** stderr **
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1528: (dbg) Run:  kubectl --context functional-20210609010438-9941 exec mysql-9bbbc5bbb-h4gxd -- mysql -ppassword -e "show databases;"
functional_test.go:1528: (dbg) Non-zero exit: kubectl --context functional-20210609010438-9941 exec mysql-9bbbc5bbb-h4gxd -- mysql -ppassword -e "show databases;": exit status 1 (225.693293ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1528: (dbg) Run:  kubectl --context functional-20210609010438-9941 exec mysql-9bbbc5bbb-h4gxd -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (20.46s)
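
The two non-zero exits above are expected noise from a MySQL container that is still initializing: ERROR 1045 and ERROR 2002 both occur before mysqld has finished its first-boot setup, so the test simply re-runs the query until `show databases;` succeeds. A minimal Go sketch of that poll-until-ready pattern (the helper is hypothetical, not the test's actual code; the pod and context names are copied from this log):

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMySQL re-runs `kubectl exec ... mysql` until the query succeeds
// or the deadline passes, mirroring the retries visible in the log above.
func waitForMySQL(kubeContext, pod string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for time.Now().Before(stop) {
		cmd := exec.Command("kubectl", "--context", kubeContext, "exec", pod,
			"--", "mysql", "-ppassword", "-e", "show databases;")
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Printf("mysql ready:\n%s", out)
			return nil
		}
		time.Sleep(2 * time.Second) // back off while mysqld initializes
	}
	return fmt.Errorf("mysql not ready within %v", deadline)
}

func main() {
	_ = waitForMySQL("functional-20210609010438-9941", "mysql-9bbbc5bbb-h4gxd", time.Minute)
}
```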

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1612: Checking for existence of /etc/test/nested/copy/9941/hosts within VM
functional_test.go:1613: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo cat /etc/test/nested/copy/9941/hosts"

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1618: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (0.86s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1653: Checking for existence of /etc/ssl/certs/9941.pem within VM
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo cat /etc/ssl/certs/9941.pem"
functional_test.go:1653: Checking for existence of /usr/share/ca-certificates/9941.pem within VM
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo cat /usr/share/ca-certificates/9941.pem"
functional_test.go:1653: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1654: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo cat /etc/ssl/certs/51391683.0"
--- PASS: TestFunctional/parallel/CertSync (0.86s)
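
The third path checked above, /etc/ssl/certs/51391683.0, is an OpenSSL subject-hash style filename under which the same synced certificate is expected; names of that form are how the system trust store indexes CA files. As an illustrative verification (not the test's actual code), a short Go sketch that confirms such a file holds a parseable X.509 certificate:

```go
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

// checkPEMCert reads a PEM file and confirms it parses as an X.509 cert.
func checkPEMCert(path string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return err
	}
	fmt.Printf("%s: subject=%s\n", path, cert.Subject)
	return nil
}

func main() {
	// Paths reused from the log above; run inside the node to reproduce.
	for _, p := range []string{"/etc/ssl/certs/9941.pem", "/etc/ssl/certs/51391683.0"} {
		if err := checkPEMCert(p); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
```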

TestFunctional/parallel/DockerEnv (1.31s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:425: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20210609010438-9941 docker-env) && out/minikube-linux-amd64 status -p functional-20210609010438-9941"

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:448: (dbg) Run:  /bin/bash -c "eval $(out/minikube-linux-amd64 -p functional-20210609010438-9941 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv (1.31s)
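
`docker-env` emits `export KEY="VALUE"` lines which the test evaluates in a bash subshell so that the following `docker images` talks to the Docker daemon inside the minikube node. A rough bash-free equivalent in Go, assuming a `minikube` binary on PATH and POSIX-shell output (the parsing details are illustrative):

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("minikube", "-p", "functional-20210609010438-9941", "docker-env").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Parse lines of the form: export KEY="VALUE"
	for _, line := range strings.Split(string(out), "\n") {
		line = strings.TrimSpace(line)
		if !strings.HasPrefix(line, "export ") {
			continue
		}
		kv := strings.SplitN(strings.TrimPrefix(line, "export "), "=", 2)
		if len(kv) == 2 {
			os.Setenv(kv[0], strings.Trim(kv[1], `"`))
		}
	}
	// With DOCKER_HOST etc. set, docker now targets the daemon in the node.
	images, err := exec.Command("docker", "images").CombinedOutput()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Print(string(images))
}
```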

TestFunctional/parallel/NodeLabels (0.06s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:198: (dbg) Run:  kubectl --context functional-20210609010438-9941 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.06s)
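
The `--template` above ranges over the first node's `.metadata.labels` map and prints each key. kubectl go-templates use Go's text/template syntax, so the same construct can be tried locally; the label map below is a made-up stand-in:

```go
package main

import (
	"os"
	"text/template"
)

func main() {
	// Same shape as the kubectl query: range over a labels map, print keys.
	const tmpl = `{{range $k, $v := .}}{{$k}} {{end}}`
	labels := map[string]string{
		"kubernetes.io/hostname": "functional-20210609010438-9941",
		"kubernetes.io/os":       "linux",
	}
	t := template.Must(template.New("labels").Parse(tmpl))
	_ = t.Execute(os.Stdout, labels) // prints the keys in sorted order
}
```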

TestFunctional/parallel/LoadImage (2.44s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:221: (dbg) Run:  docker pull busybox:1.33

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:221: (dbg) Done: docker pull busybox:1.33: (1.294423442s)
functional_test.go:228: (dbg) Run:  docker tag busybox:1.33 docker.io/library/busybox:load-functional-20210609010438-9941
functional_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 image load docker.io/library/busybox:load-functional-20210609010438-9941

=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210609010438-9941 -- docker image inspect docker.io/library/busybox:load-functional-20210609010438-9941
--- PASS: TestFunctional/parallel/LoadImage (2.44s)

TestFunctional/parallel/RemoveImage (2.73s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:262: (dbg) Run:  docker pull busybox:1.32

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:262: (dbg) Done: docker pull busybox:1.32: (1.304525657s)
functional_test.go:269: (dbg) Run:  docker tag busybox:1.32 docker.io/library/busybox:remove-functional-20210609010438-9941
functional_test.go:275: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 image load docker.io/library/busybox:remove-functional-20210609010438-9941

=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:281: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 image rm docker.io/library/busybox:remove-functional-20210609010438-9941
functional_test.go:318: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210609010438-9941 -- docker images
--- PASS: TestFunctional/parallel/RemoveImage (2.73s)

TestFunctional/parallel/BuildImage (4.11s)

=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:342: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 image build -t localhost/my-image:functional-20210609010438-9941 testdata/build

=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:342: (dbg) Done: out/minikube-linux-amd64 -p functional-20210609010438-9941 image build -t localhost/my-image:functional-20210609010438-9941 testdata/build: (3.738724822s)
functional_test.go:347: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210609010438-9941 image build -t localhost/my-image:functional-20210609010438-9941 testdata/build:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM busybox
latest: Pulling from library/busybox
b71f96345d44: Pulling fs layer
b71f96345d44: Verifying Checksum
b71f96345d44: Download complete
b71f96345d44: Pull complete
Digest: sha256:930490f97e5b921535c153e0e7110d251134cc4b72bbb8133c6a5065cc68580d
Status: Downloaded newer image for busybox:latest
---> 69593048aa3a
Step 2/3 : RUN true
---> Running in 073d7b6627ef
Removing intermediate container 073d7b6627ef
---> f1d1055ee990
Step 3/3 : ADD content.txt /
---> 66c1988c83a0
Successfully built 66c1988c83a0
Successfully tagged localhost/my-image:functional-20210609010438-9941
functional_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210609010438-9941 -- docker image inspect localhost/my-image:functional-20210609010438-9941
--- PASS: TestFunctional/parallel/BuildImage (4.11s)

TestFunctional/parallel/ListImages (0.33s)

=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 image ls

=== CONT  TestFunctional/parallel/ListImages
functional_test.go:391: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210609010438-9941 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.20.7
k8s.gcr.io/kube-proxy:v1.20.7
k8s.gcr.io/kube-controller-manager:v1.20.7
k8s.gcr.io/kube-apiserver:v1.20.7
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/coredns:1.7.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210609010438-9941
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
--- PASS: TestFunctional/parallel/ListImages (0.33s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1681: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo systemctl is-active crio"

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:1681: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210609010438-9941 ssh "sudo systemctl is-active crio": exit status 1 (351.491645ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.35s)
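
The non-zero exit is the point of this test: with the docker runtime active, `systemctl is-active crio` prints `inactive` and exits with status 3, which surfaces as the ssh process status above. A short Go sketch of reading that exit code (the expected values are illustrative):

```go
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// `systemctl is-active crio` exits 0 when active, non-zero (3) when inactive.
	cmd := exec.Command("systemctl", "is-active", "crio")
	out, err := cmd.CombinedOutput()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Printf("crio is active: %s", out) // would be a failure for this test
	case errors.As(err, &exitErr):
		fmt.Printf("crio not active (exit %d): %s", exitErr.ExitCode(), out)
	default:
		fmt.Println("could not run systemctl:", err)
	}
}
```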

TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1119: (dbg) Run:  out/minikube-linux-amd64 profile lis

=== CONT  TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1123: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.49s)

TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1157: (dbg) Run:  out/minikube-linux-amd64 profile list

=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1162: Took "352.236944ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1171: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1176: Took "70.615844ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.42s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1207: (dbg) Run:  out/minikube-linux-amd64 profile list -o json

=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1212: Took "318.752582ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1220: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1225: Took "65.307938ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210609010438-9941 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1773: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1773: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1773: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210609010438-9941 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.1s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210609010438-9941 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.10s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.105.14.39 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210609010438-9941 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_busybox_image (0.08s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:165: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210609010438-9941
functional_test.go:170: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210609010438-9941
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:177: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210609010438-9941
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.05s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:185: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210609010438-9941
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.34s)

=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210609011127-9941 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210609011127-9941 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.326809ms)

-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210609011127-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"ebc375b8-3e37-4cbc-a663-77c615fe3afe","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig"},"datacontenttype":"application/json","id":"51513759-b867-4ced-9e3d-c2bd4a44738b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"6594d268-a853-4e0c-9b4d-6d1c8f195656","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube"},"datacontenttype":"application/json","id":"c9ecfdea-d5bd-4505-9ea9-94414660f682","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=11610"},"datacontenttype":"application/json","id":"d9187b58-2dba-4d64-889f-7dc7fbedbeed","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"1e5ee228-38bb-474c-87cb-e288f37bd5db","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20210609011127-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210609011127-9941
--- PASS: TestErrorJSONOutput (0.34s)
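
Each line of the JSON output above is a CloudEvents-style envelope whose `data` payload is a map of strings; the final event has type `io.k8s.sigs.minikube.error` and carries `exitcode` "56". A minimal decoder for such a stream, assuming only the fields visible in this log (minikube's real schema may have more):

```go
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// event mirrors the fields seen in the log lines above.
type event struct {
	Type string            `json:"type"`
	ID   string            `json:"id"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. piped from `minikube start --output=json`
	for sc.Scan() {
		var ev event
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		if ev.Type == "io.k8s.sigs.minikube.error" {
			fmt.Printf("error event %s: %s (exitcode %s)\n",
				ev.ID, ev.Data["message"], ev.Data["exitcode"])
		}
	}
}
```

Piping the stdout shown above into this program would print only the DRV_UNSUPPORTED_OS error event.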

TestKicCustomNetwork/create_custom_network (26.74s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210609011127-9941 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210609011127-9941 --network=: (24.591274873s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210609011127-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210609011127-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210609011127-9941: (2.103398667s)
--- PASS: TestKicCustomNetwork/create_custom_network (26.74s)

TestKicCustomNetwork/use_default_bridge_network (26.14s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210609011154-9941 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210609011154-9941 --network=bridge: (23.82173298s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210609011154-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210609011154-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210609011154-9941: (2.283137234s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (26.14s)

TestKicExistingNetwork (26.71s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210609011220-9941 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210609011220-9941 --network=existing-network: (24.40622551s)
helpers_test.go:171: Cleaning up "existing-network-20210609011220-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210609011220-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210609011220-9941: (2.059923333s)
--- PASS: TestKicExistingNetwork (26.71s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMultiNode/serial/FreshStart2Nodes (140.94s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210609011247-9941 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0609 01:13:34.046524    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:34.051809    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:34.062031    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:34.082333    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:34.122572    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:34.202865    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:34.363257    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:34.683792    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:35.324160    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:36.604467    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:39.165176    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:44.285933    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:13:54.526506    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:14:15.007596    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:14:55.968284    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
multinode_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210609011247-9941 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (2m20.409324852s)
multinode_test.go:86: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (140.94s)

TestMultiNode/serial/DeployApp2Nodes (5.93s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:431: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:436: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- rollout status deployment/busybox
multinode_test.go:436: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- rollout status deployment/busybox: (3.847287175s)
multinode_test.go:442: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:454: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-8mpbz -- nslookup kubernetes.io
multinode_test.go:462: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-t9l4k -- nslookup kubernetes.io
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-8mpbz -- nslookup kubernetes.default
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-t9l4k -- nslookup kubernetes.default
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-8mpbz -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:480: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-t9l4k -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.93s)

TestMultiNode/serial/PingHostFrom2Pods (1.11s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:490: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-8mpbz -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 ssh -p multinode-20210609011247-9941 "ip -4 -br -o a s eth0 | tr -s ' ' | cut -d' ' -f3"
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210609011247-9941 -- exec busybox-6cd5ff77cb-t9l4k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:503: (dbg) Run:  out/minikube-linux-amd64 ssh -p multinode-20210609011247-9941 "ip -4 -br -o a s eth0 | tr -s ' ' | cut -d' ' -f3"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.11s)
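
The pipeline `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` takes the third space-separated field of the fifth output line, i.e. the resolved address, which the test then compares with the node's eth0 address from the `ip -4 -br -o a s eth0` command. The same field extraction in Go (the sample output is a stand-in, not captured from this run):

```go
package main

import (
	"fmt"
	"strings"
)

// fieldAt returns space-separated field `col` of line `row` (both 1-based),
// mimicking `awk 'NR==row' | cut -d' ' -f col`.
func fieldAt(out string, row, col int) string {
	lines := strings.Split(out, "\n")
	if row > len(lines) {
		return ""
	}
	fields := strings.Split(lines[row-1], " ")
	if col > len(fields) {
		return ""
	}
	return fields[col-1]
}

func main() {
	sample := "Server:    10.96.0.10\nAddress:   10.96.0.10:53\n\n" +
		"Name:      host.minikube.internal\nAddress 1: 192.168.49.1 host.minikube.internal\n"
	fmt.Println(fieldAt(sample, 5, 3)) // -> 192.168.49.1
}
```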

TestMultiNode/serial/AddNode (25.73s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:105: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210609011247-9941 -v 3 --alsologtostderr
multinode_test.go:105: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210609011247-9941 -v 3 --alsologtostderr: (24.995244848s)
multinode_test.go:111: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (25.73s)

TestMultiNode/serial/ProfileList (0.3s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

TestMultiNode/serial/CopyFile (2.35s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:168: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --output json --alsologtostderr
functional_test.go:1467: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 cp testdata/cp-test.txt /home/docker/cp-test.txt
functional_test.go:1481: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 ssh "sudo cat /home/docker/cp-test.txt"
functional_test.go:1467: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 cp testdata/cp-test.txt multinode-20210609011247-9941-m02:/home/docker/cp-test.txt
functional_test.go:1481: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 ssh -n multinode-20210609011247-9941-m02 "sudo cat /home/docker/cp-test.txt"
functional_test.go:1467: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 cp testdata/cp-test.txt multinode-20210609011247-9941-m03:/home/docker/cp-test.txt
functional_test.go:1481: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 ssh -n multinode-20210609011247-9941-m03 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestMultiNode/serial/CopyFile (2.35s)

TestMultiNode/serial/StopNode (2.56s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:190: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 node stop m03
E0609 01:15:43.574294    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
multinode_test.go:190: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210609011247-9941 node stop m03: (1.420571403s)
multinode_test.go:196: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status
multinode_test.go:196: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210609011247-9941 status: exit status 7 (570.865395ms)

-- stdout --
	multinode-20210609011247-9941
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210609011247-9941-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210609011247-9941-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:203: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr
multinode_test.go:203: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr: exit status 7 (564.724807ms)

-- stdout --
	multinode-20210609011247-9941
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210609011247-9941-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210609011247-9941-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0609 01:15:45.595476  116184 out.go:291] Setting OutFile to fd 1 ...
	I0609 01:15:45.595556  116184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:15:45.595565  116184 out.go:304] Setting ErrFile to fd 2...
	I0609 01:15:45.595569  116184 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:15:45.595668  116184 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	I0609 01:15:45.595820  116184 out.go:298] Setting JSON to false
	I0609 01:15:45.595836  116184 mustload.go:65] Loading cluster: multinode-20210609011247-9941
	I0609 01:15:45.596129  116184 status.go:253] checking status of multinode-20210609011247-9941 ...
	I0609 01:15:45.596502  116184 cli_runner.go:115] Run: docker container inspect multinode-20210609011247-9941 --format={{.State.Status}}
	I0609 01:15:45.635573  116184 status.go:328] multinode-20210609011247-9941 host status = "Running" (err=<nil>)
	I0609 01:15:45.635604  116184 host.go:66] Checking if "multinode-20210609011247-9941" exists ...
	I0609 01:15:45.635826  116184 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210609011247-9941
	I0609 01:15:45.677653  116184 host.go:66] Checking if "multinode-20210609011247-9941" exists ...
	I0609 01:15:45.677941  116184 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:15:45.677981  116184 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210609011247-9941
	I0609 01:15:45.715620  116184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32807 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/multinode-20210609011247-9941/id_rsa Username:docker}
	I0609 01:15:45.805973  116184 ssh_runner.go:149] Run: systemctl --version
	I0609 01:15:45.809343  116184 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:15:45.818629  116184 kubeconfig.go:93] found "multinode-20210609011247-9941" server: "https://192.168.49.2:8443"
	I0609 01:15:45.818652  116184 api_server.go:148] Checking apiserver status ...
	I0609 01:15:45.818678  116184 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0609 01:15:45.836918  116184 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup
	I0609 01:15:45.844069  116184 api_server.go:164] apiserver freezer: "8:freezer:/docker/3d95e932f971403a9553e8d726483686f46bf32ff79f73e4d3615267e69edd40/kubepods/burstable/pod01d7e312da0f9c4176daa8464d4d1a50/8715ed4a0ed25a283ad654ddd03c2ee97a7d21b03fd6ffbc1ba2cb0f7878d8d8"
	I0609 01:15:45.844125  116184 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/3d95e932f971403a9553e8d726483686f46bf32ff79f73e4d3615267e69edd40/kubepods/burstable/pod01d7e312da0f9c4176daa8464d4d1a50/8715ed4a0ed25a283ad654ddd03c2ee97a7d21b03fd6ffbc1ba2cb0f7878d8d8/freezer.state
	I0609 01:15:45.850169  116184 api_server.go:186] freezer state: "THAWED"
	I0609 01:15:45.850192  116184 api_server.go:223] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0609 01:15:45.854737  116184 api_server.go:249] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0609 01:15:45.854760  116184 status.go:419] multinode-20210609011247-9941 apiserver status = Running (err=<nil>)
	I0609 01:15:45.854773  116184 status.go:255] multinode-20210609011247-9941 status: &{Name:multinode-20210609011247-9941 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0609 01:15:45.854800  116184 status.go:253] checking status of multinode-20210609011247-9941-m02 ...
	I0609 01:15:45.855067  116184 cli_runner.go:115] Run: docker container inspect multinode-20210609011247-9941-m02 --format={{.State.Status}}
	I0609 01:15:45.894539  116184 status.go:328] multinode-20210609011247-9941-m02 host status = "Running" (err=<nil>)
	I0609 01:15:45.894572  116184 host.go:66] Checking if "multinode-20210609011247-9941-m02" exists ...
	I0609 01:15:45.894846  116184 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210609011247-9941-m02
	I0609 01:15:45.935954  116184 host.go:66] Checking if "multinode-20210609011247-9941-m02" exists ...
	I0609 01:15:45.936345  116184 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0609 01:15:45.936391  116184 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210609011247-9941-m02
	I0609 01:15:45.974599  116184 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32812 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/machines/multinode-20210609011247-9941-m02/id_rsa Username:docker}
	I0609 01:15:46.057629  116184 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0609 01:15:46.065693  116184 status.go:255] multinode-20210609011247-9941-m02 status: &{Name:multinode-20210609011247-9941-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0609 01:15:46.065723  116184 status.go:253] checking status of multinode-20210609011247-9941-m03 ...
	I0609 01:15:46.065955  116184 cli_runner.go:115] Run: docker container inspect multinode-20210609011247-9941-m03 --format={{.State.Status}}
	I0609 01:15:46.103364  116184 status.go:328] multinode-20210609011247-9941-m03 host status = "Stopped" (err=<nil>)
	I0609 01:15:46.103388  116184 status.go:341] host is not running, skipping remaining checks
	I0609 01:15:46.103395  116184 status.go:255] multinode-20210609011247-9941-m03 status: &{Name:multinode-20210609011247-9941-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.56s)

TestMultiNode/serial/StartAfterStop (24.89s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:224: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 node start m03 --alsologtostderr
multinode_test.go:234: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210609011247-9941 node start m03 --alsologtostderr: (24.033495667s)
multinode_test.go:241: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status
multinode_test.go:255: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (24.89s)

TestMultiNode/serial/DeleteNode (5.39s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:344: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 node delete m03
multinode_test.go:344: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210609011247-9941 node delete m03: (4.712020324s)
multinode_test.go:350: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr
multinode_test.go:364: (dbg) Run:  docker volume ls
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.39s)

TestMultiNode/serial/StopMultiNode (22.11s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:264: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 stop
E0609 01:16:17.888734    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
multinode_test.go:264: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210609011247-9941 stop: (21.849103103s)
multinode_test.go:270: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status
multinode_test.go:270: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210609011247-9941 status: exit status 7 (133.556818ms)

                                                
                                                
-- stdout --
	multinode-20210609011247-9941
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210609011247-9941-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:277: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr
multinode_test.go:277: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr: exit status 7 (130.537759ms)

                                                
                                                
-- stdout --
	multinode-20210609011247-9941
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210609011247-9941-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0609 01:16:38.423312  121060 out.go:291] Setting OutFile to fd 1 ...
	I0609 01:16:38.423515  121060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:16:38.423525  121060 out.go:304] Setting ErrFile to fd 2...
	I0609 01:16:38.423528  121060 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0609 01:16:38.423625  121060 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/bin
	I0609 01:16:38.423792  121060 out.go:298] Setting JSON to false
	I0609 01:16:38.423811  121060 mustload.go:65] Loading cluster: multinode-20210609011247-9941
	I0609 01:16:38.424049  121060 status.go:253] checking status of multinode-20210609011247-9941 ...
	I0609 01:16:38.424392  121060 cli_runner.go:115] Run: docker container inspect multinode-20210609011247-9941 --format={{.State.Status}}
	I0609 01:16:38.461328  121060 status.go:328] multinode-20210609011247-9941 host status = "Stopped" (err=<nil>)
	I0609 01:16:38.461351  121060 status.go:341] host is not running, skipping remaining checks
	I0609 01:16:38.461356  121060 status.go:255] multinode-20210609011247-9941 status: &{Name:multinode-20210609011247-9941 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0609 01:16:38.461377  121060 status.go:253] checking status of multinode-20210609011247-9941-m02 ...
	I0609 01:16:38.461649  121060 cli_runner.go:115] Run: docker container inspect multinode-20210609011247-9941-m02 --format={{.State.Status}}
	I0609 01:16:38.498355  121060 status.go:328] multinode-20210609011247-9941-m02 host status = "Stopped" (err=<nil>)
	I0609 01:16:38.498377  121060 status.go:341] host is not running, skipping remaining checks
	I0609 01:16:38.498384  121060 status.go:255] multinode-20210609011247-9941-m02 status: &{Name:multinode-20210609011247-9941-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (22.11s)
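Note: here `status` exiting 7 is the expected signal that every component is Stopped, not a command failure; the harness marks the same code "(may be ok)" elsewhere in this report. A sketch of scripting against that behaviour, assuming a hypothetical profile named demo:

    minikube -p demo stop
    minikube -p demo status
    case $? in
      0) echo running ;;
      7) echo stopped ;;       # host/kubelet/apiserver all down, as asserted above
      *) echo status error ;;
    esac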

                                                
                                    
TestMultiNode/serial/RestartMultiNode (130.54s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:294: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210609011247-9941 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker
E0609 01:18:34.046520    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
multinode_test.go:304: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210609011247-9941 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=docker: (2m9.840038822s)
multinode_test.go:310: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210609011247-9941 status --alsologtostderr
multinode_test.go:324: (dbg) Run:  kubectl get nodes
multinode_test.go:332: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (130.54s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (26.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:393: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210609011247-9941
multinode_test.go:402: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210609011247-9941-m02 --driver=docker  --container-runtime=docker
multinode_test.go:402: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210609011247-9941-m02 --driver=docker  --container-runtime=docker: exit status 14 (104.415505ms)

                                                
                                                
-- stdout --
	* [multinode-20210609011247-9941-m02] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
	  - MINIKUBE_LOCATION=11610
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210609011247-9941-m02' is duplicated with machine name 'multinode-20210609011247-9941-m02' in profile 'multinode-20210609011247-9941'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:410: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210609011247-9941-m03 --driver=docker  --container-runtime=docker
E0609 01:19:01.729026    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
multinode_test.go:410: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210609011247-9941-m03 --driver=docker  --container-runtime=docker: (24.200120326s)
multinode_test.go:417: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210609011247-9941
multinode_test.go:417: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210609011247-9941: exit status 80 (275.296999ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-20210609011247-9941
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210609011247-9941-m03 already exists in multinode-20210609011247-9941-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯
	

                                                
                                                
** /stderr **
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210609011247-9941-m03
multinode_test.go:422: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210609011247-9941-m03: (2.174428922s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (26.81s)
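Note: both rejections above are by design: a new profile may not reuse a machine name inside an existing multinode profile (exit 14, MK_USAGE), and `node add` refuses a node whose name is already taken by a standalone profile (exit 80, GUEST_NODE_ADD). A sketch of reproducing the first check, assuming the same profiles still exist:

    minikube node list -p multinode-20210609011247-9941    # shows the -m02 machine name
    minikube start -p multinode-20210609011247-9941-m02    # exits 14: profile name must be unique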

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.28s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (11.279641229s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (11.28s)
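Note: every kvm2-driver case in this group runs the same probe: install the freshly built .deb inside a throwaway container of the target distro and let dpkg's exit status decide the test. Reproducing the debian:sid case by hand (a sketch, assuming the package was built into ./out):

    docker run --rm -v "$PWD/out:/var/tmp" debian:sid sh -c \
      "apt-get update; apt-get install -y libvirt0; \
       dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
    # exit status 0 = the package and its libvirt dependency install cleanly on that image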

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.23s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (10.225642417s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (10.23s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.7s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (9.699018569s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.70s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.53s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (8.525981251s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.53s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.8s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (16.796136693s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.80s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.33s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (16.327632019s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (16.33s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.37s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
E0609 01:20:43.573739    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (16.365060494s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (16.37s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

                                                
                                    
TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.34s)

                                                
                                                
=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/docker_Linux_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.21.0~beta.0-0_amd64.deb": (15.335260145s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (15.34s)

                                                
                                    
TestPreload (115.25s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210609012104-9941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0
E0609 01:22:06.618786    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210609012104-9941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.0: (1m18.59417754s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210609012104-9941 -- docker pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210609012104-9941 -- docker pull busybox: (2.515976866s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210609012104-9941 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210609012104-9941 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=docker --kubernetes-version=v1.17.3: (30.666739245s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210609012104-9941 -- docker images
helpers_test.go:171: Cleaning up "test-preload-20210609012104-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210609012104-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210609012104-9941: (3.168681562s)
--- PASS: TestPreload (115.25s)
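Note: the preload test seeds an image into the cluster's Docker daemon, then restarts onto a nearby Kubernetes version and checks the image survived. Condensed to its commands (a sketch, assuming a hypothetical profile named demo):

    minikube start -p demo --preload=false --kubernetes-version=v1.17.0
    minikube ssh -p demo -- docker pull busybox
    minikube start -p demo --kubernetes-version=v1.17.3    # restart; cached images should persist
    minikube ssh -p demo -- docker images                  # busybox must still be listed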

                                                
                                    
TestScheduledStopUnix (52.63s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:126: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210609012300-9941 --memory=2048 --driver=docker  --container-runtime=docker
scheduled_stop_test.go:126: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210609012300-9941 --memory=2048 --driver=docker  --container-runtime=docker: (25.180439796s)
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210609012300-9941 --schedule 5m
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941
scheduled_stop_test.go:167: signal error was:  <nil>
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210609012300-9941 --schedule 8s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210609012300-9941 --cancel-scheduled
E0609 01:23:34.046406    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210609012300-9941
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210609012300-9941 --schedule 5s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210609012300-9941
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941: exit status 3 (2.44582048s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0609 01:23:48.326136  172646 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56420->127.0.0.1:32847: read: connection reset by peer
	E0609 01:23:48.326153  172646 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:56420->127.0.0.1:32847: read: connection reset by peer

                                                
                                                
** /stderr **
scheduled_stop_test.go:174: status error: exit status 3 (may be ok)
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210609012300-9941 -n scheduled-stop-20210609012300-9941: exit status 7 (94.995611ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:174: status error: exit status 7 (may be ok)
helpers_test.go:171: Cleaning up "scheduled-stop-20210609012300-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210609012300-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210609012300-9941: (1.971363558s)
--- PASS: TestScheduledStopUnix (52.63s)
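Note: the scheduled-stop flow reduces to three spellings of `minikube stop`, with `status --format={{.TimeToStop}}` exposing the pending schedule. A sketch, assuming a hypothetical profile named demo:

    minikube stop -p demo --schedule 5m                   # arm a stop five minutes out
    minikube status -p demo --format='{{.TimeToStop}}'    # inspect the countdown
    minikube stop -p demo --cancel-scheduled              # disarm it
    minikube stop -p demo --schedule 5s                   # short fuse; Host reports Stopped soon after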

                                                
                                    
TestSkaffold (70.2s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:57: (dbg) Run:  /tmp/skaffold.exe793749241 version
skaffold_test.go:61: skaffold version: v1.25.0
skaffold_test.go:64: (dbg) Run:  out/minikube-linux-amd64 start -p skaffold-20210609012352-9941 --memory=2600 --driver=docker  --container-runtime=docker
skaffold_test.go:64: (dbg) Done: out/minikube-linux-amd64 start -p skaffold-20210609012352-9941 --memory=2600 --driver=docker  --container-runtime=docker: (24.106036407s)
skaffold_test.go:77: copying out/minikube-linux-amd64 to /home/jenkins/workspace/docker_Linux_integration/out/minikube
skaffold_test.go:101: (dbg) Run:  /tmp/skaffold.exe793749241 run --minikube-profile skaffold-20210609012352-9941 --kube-context skaffold-20210609012352-9941 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:101: (dbg) Done: /tmp/skaffold.exe793749241 run --minikube-profile skaffold-20210609012352-9941 --kube-context skaffold-20210609012352-9941 --status-check=true --port-forward=false --interactive=false: (31.925775022s)
skaffold_test.go:107: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:335: "leeroy-app-7597c65474-54m49" [0298ff45-4033-440f-9472-e3e95e0bd0b9] Running
skaffold_test.go:107: (dbg) TestSkaffold: app=leeroy-app healthy within 5.011343407s
skaffold_test.go:110: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:335: "leeroy-web-7f9f9c9599-qsdsw" [b53b8547-2bc1-4ed9-bf32-7492259a0113] Running
skaffold_test.go:110: (dbg) TestSkaffold: app=leeroy-web healthy within 5.008449965s
helpers_test.go:171: Cleaning up "skaffold-20210609012352-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p skaffold-20210609012352-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p skaffold-20210609012352-9941: (3.403739571s)
--- PASS: TestSkaffold (70.20s)
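Note: the skaffold leg deploys skaffold's leeroy-app/leeroy-web example straight into the minikube profile and waits for both deployments to become healthy. The call that matters, as a sketch assuming skaffold on PATH and a running hypothetical profile named demo:

    skaffold run --minikube-profile demo --kube-context demo \
      --status-check=true --port-forward=false --interactive=false
    kubectl get pods -l app=leeroy-app    # the test waits for Running here, then for app=leeroy-web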

                                                
                                    
TestInsufficientStorage (9.11s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210609012502-9941 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210609012502-9941 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=docker: exit status 26 (6.497213452s)

                                                
                                                
-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210609012502-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"304ded68-639a-443e-8d65-34678ae049f2","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig"},"datacontenttype":"application/json","id":"3e63039b-fd54-4ca2-bed1-8bc2cacac61d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"5fb6e19c-3b6f-4cf1-bde1-1ee2176f09d8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube"},"datacontenttype":"application/json","id":"57e1f6cf-4ca9-4cd3-9862-f160b4b83137","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=11610"},"datacontenttype":"application/json","id":"9e890aa9-f02c-4361-8b13-d38835e03218","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"ed322a16-aeac-42d0-a420-445b354e9960","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"1506b763-fe99-43a6-a60d-c2062621cfba","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"8e10d3f2-5b1c-4073-b555-a9d52e1989fe","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"25a9e567-1edf-4ed3-9a26-1bfd91450a95","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210609012502-9941 in cluster insufficient-storage-20210609012502-9941","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"b7432386-b27c-49d6-b5d5-8e73b905799b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"20b65a06-d8b9-4d82-9a27-532169c38fff","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"2ea5a604-6a27-4f52-a950-2c9d1eb57843","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"3d65dcd5-0952-482d-adf6-ce5b2a88e707","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210609012502-9941 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210609012502-9941 --output=json --layout=cluster: exit status 7 (285.014608ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210609012502-9941","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.21.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210609012502-9941","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0609 01:25:09.749702  181368 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210609012502-9941" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210609012502-9941 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210609012502-9941 --output=json --layout=cluster: exit status 7 (283.982756ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-20210609012502-9941","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.21.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210609012502-9941","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E0609 01:25:10.034670  181428 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210609012502-9941" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	E0609 01:25:10.045267  181428 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/insufficient-storage-20210609012502-9941/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:171: Cleaning up "insufficient-storage-20210609012502-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210609012502-9941
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210609012502-9941: (2.039198855s)
--- PASS: TestInsufficientStorage (9.11s)
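Note: with --output=json, minikube emits one CloudEvents-style object per line, so the failure step can be picked out mechanically rather than scraped from prose. A sketch, assuming a hypothetical profile named demo on a host whose /var is nearly full:

    minikube start -p demo --output=json --driver=docker | grep io.k8s.sigs.minikube.error
    # the error event names RSRC_DOCKER_STORAGE with exitcode 26, as captured above; afterwards:
    minikube status -p demo --output=json --layout=cluster
    # status exits 7 and reports StatusCode 507 / InsufficientStorage for the cluster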

                                                
                                    
TestRunningBinaryUpgrade (94.76s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Run:  /tmp/minikube-v1.9.0.579275581.exe start -p running-upgrade-20210609012722-9941 --memory=2200 --vm-driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Done: /tmp/minikube-v1.9.0.579275581.exe start -p running-upgrade-20210609012722-9941 --memory=2200 --vm-driver=docker  --container-runtime=docker: (54.224570139s)
version_upgrade_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210609012722-9941 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210609012722-9941 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (37.383686511s)
helpers_test.go:171: Cleaning up "running-upgrade-20210609012722-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210609012722-9941

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210609012722-9941: (2.52014434s)
--- PASS: TestRunningBinaryUpgrade (94.76s)
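Note: a running-binary upgrade is just a second `start` on the same profile with the newer build; the new binary adopts the live cluster instead of recreating it. A sketch, with old-minikube standing in for the archived v1.9.0 binary the test extracts to /tmp, and demo for the profile:

    old-minikube start -p demo --vm-driver=docker    # v1.9.0-era flag spelling, as in the log
    minikube start -p demo --driver=docker           # current binary takes over the running cluster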

                                                
                                    
TestKubernetesUpgrade (128.38s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (1m18.900588323s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210609012512-9941

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:232: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210609012512-9941: (11.003326194s)
version_upgrade_test.go:237: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210609012512-9941 status --format={{.Host}}
version_upgrade_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210609012512-9941 status --format={{.Host}}: exit status 7 (98.740225ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:239: status error: exit status 7 (may be ok)
version_upgrade_test.go:248: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.22.0-alpha.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:248: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.22.0-alpha.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (20.554365456s)
version_upgrade_test.go:253: (dbg) Run:  kubectl --context kubernetes-upgrade-20210609012512-9941 version --output=json
version_upgrade_test.go:272: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:274: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker
version_upgrade_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=docker: exit status 106 (119.651769ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-20210609012512-9941] minikube v1.21.0-beta.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube
	  - MINIKUBE_LOCATION=11610
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-alpha.2 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210609012512-9941
	    minikube start -p kubernetes-upgrade-20210609012512-9941 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210609012512-99412 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-alpha.2, by running:
	    
	    minikube start -p kubernetes-upgrade-20210609012512-9941 --kubernetes-version=v1.22.0-alpha.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:278: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:280: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.22.0-alpha.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
version_upgrade_test.go:280: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210609012512-9941 --memory=2200 --kubernetes-version=v1.22.0-alpha.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (14.398208141s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20210609012512-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210609012512-9941

                                                
                                                
=== CONT  TestKubernetesUpgrade
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210609012512-9941: (3.242592557s)
--- PASS: TestKubernetesUpgrade (128.38s)
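Note: the upgrade path is start old → stop → start new, and a downgrade of the same profile is refused with exit 106 plus the recovery suggestions shown above. Condensed (a sketch, assuming a hypothetical profile named demo):

    minikube start -p demo --kubernetes-version=v1.14.0
    minikube stop -p demo
    minikube start -p demo --kubernetes-version=v1.22.0-alpha.2    # in-place upgrade
    minikube start -p demo --kubernetes-version=v1.14.0            # exit 106: K8S_DOWNGRADE_UNSUPPORTED
    minikube delete -p demo                                        # the supported route back down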

                                                
                                    
TestMissingContainerUpgrade (130.15s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:314: (dbg) Run:  /tmp/minikube-v1.9.1.298685444.exe start -p missing-upgrade-20210609012512-9941 --memory=2200 --driver=docker  --container-runtime=docker
E0609 01:25:43.573712    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
version_upgrade_test.go:314: (dbg) Done: /tmp/minikube-v1.9.1.298685444.exe start -p missing-upgrade-20210609012512-9941 --memory=2200 --driver=docker  --container-runtime=docker: (1m9.630417879s)
version_upgrade_test.go:323: (dbg) Run:  docker stop missing-upgrade-20210609012512-9941

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:323: (dbg) Done: docker stop missing-upgrade-20210609012512-9941: (10.485494649s)
version_upgrade_test.go:328: (dbg) Run:  docker rm missing-upgrade-20210609012512-9941
version_upgrade_test.go:334: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210609012512-9941 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:334: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210609012512-9941 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (47.060076514s)
helpers_test.go:171: Cleaning up "missing-upgrade-20210609012512-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210609012512-9941

                                                
                                                
=== CONT  TestMissingContainerUpgrade
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210609012512-9941: (2.489668295s)
--- PASS: TestMissingContainerUpgrade (130.15s)
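Note: this case covers the cluster container disappearing behind minikube's back: it is removed with raw docker commands and the current binary must recreate it on the next start. A sketch, with old-minikube again standing in for the extracted v1.9.1 binary and demo for the profile (the container name matches the profile name):

    old-minikube start -p demo --driver=docker
    docker stop demo && docker rm demo    # make the node container go missing
    minikube start -p demo                # current binary detects the loss and rebuilds the node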

                                                
                                    
TestPause/serial/Start (165.75s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210609012512-9941 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210609012512-9941 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=docker: (2m45.749549441s)
--- PASS: TestPause/serial/Start (165.75s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (5.5s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210609012512-9941 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker
pause_test.go:89: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210609012512-9941 --alsologtostderr -v=1 --driver=docker  --container-runtime=docker: (5.489040401s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (5.50s)

                                                
                                    
TestPause/serial/Pause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210609012512-9941 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.53s)

                                                
                                    
TestPause/serial/VerifyStatus (0.34s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210609012512-9941 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210609012512-9941 --output=json --layout=cluster: exit status 2 (334.890447ms)

                                                
                                                
-- stdout --
	{"Name":"pause-20210609012512-9941","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 14 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.21.0-beta.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210609012512-9941","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.34s)
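Note: the cluster-layout view borrows HTTP-flavoured codes in this run: 200 OK, 405 Stopped, 418 Paused, 507 InsufficientStorage, and `status` itself exits 2 while paused. A sketch of checking for the paused state, assuming a hypothetical profile named demo:

    minikube pause -p demo
    minikube status -p demo --output=json --layout=cluster
    # expect "StatusCode":418 ("Paused") for the cluster and 405 ("Stopped") for the kubelet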

                                                
                                    
TestPause/serial/Unpause (0.57s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210609012512-9941 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.57s)

                                                
                                    
TestPause/serial/PauseAgain (0.78s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210609012512-9941 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.78s)

                                                
                                    
TestPause/serial/DeletePaused (2.93s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210609012512-9941 --alsologtostderr -v=5
pause_test.go:129: (dbg) Done: out/minikube-linux-amd64 delete -p pause-20210609012512-9941 --alsologtostderr -v=5: (2.925686576s)
--- PASS: TestPause/serial/DeletePaused (2.93s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.89s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:139: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:165: (dbg) Run:  docker ps -a
pause_test.go:170: (dbg) Run:  docker volume inspect pause-20210609012512-9941
pause_test.go:170: (dbg) Non-zero exit: docker volume inspect pause-20210609012512-9941: exit status 1 (40.671535ms)

                                                
                                                
-- stdout --
	[]

                                                
                                                
-- /stdout --
** stderr ** 
	Error: No such volume: pause-20210609012512-9941

                                                
                                                
** /stderr **
--- PASS: TestPause/serial/VerifyDeletedResources (0.89s)
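Note: deletion is verified from the Docker side as well: the profile must vanish from `profile list`, and its container and named volume must be gone, so `docker volume inspect` exiting 1 is the desired outcome here. A sketch, assuming a hypothetical profile named demo:

    minikube delete -p demo
    minikube profile list --output json    # demo no longer listed
    docker ps -a                           # no leftover demo container
    docker volume inspect demo             # expect: Error: No such volume: demo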

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210609012720-9941

                                                
                                                
=== CONT  TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:211: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-20210609012720-9941: (1.467299372s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.47s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (315.58s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0

                                                
                                                
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: (5m15.577682076s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (315.58s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (90.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2

=== CONT  TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2: (1m30.486857084s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (90.49s)

TestStartStop/group/embed-certs/serial/FirstStart (129.07s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210609012903-9941 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7

=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210609012903-9941 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7: (2m9.071777033s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (129.07s)

TestStartStop/group/default-k8s-different-port/serial/FirstStart (128.16s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210609012935-9941 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7
E0609 01:29:49.547068    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:49.738689    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:49.749017    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:49.769262    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:49.809565    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:49.889908    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:50.050987    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:50.372109    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:51.013139    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:52.294232    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:54.854788    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:29:57.089717    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:29:59.975989    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:30:10.216679    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:30:30.697280    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210609012935-9941 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7: (2m8.160391986s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (128.16s)

TestStartStop/group/no-preload/serial/DeployApp (10.51s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210609012901-9941 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [5f095dca-4ac4-4367-9723-36945a0b8e36] Pending
helpers_test.go:335: "busybox" [5f095dca-4ac4-4367-9723-36945a0b8e36] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [5f095dca-4ac4-4367-9723-36945a0b8e36] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.015415234s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context no-preload-20210609012901-9941 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.51s)
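Note: DeployApp follows a create / wait-for-Ready / exec pattern. A rough manual equivalent under the same context, using the testdata/busybox.yaml manifest and label selector shown above; the "kubectl wait" call is my substitution for the harness's own polling, with the timeout mirroring the 8m0s it allows:

    kubectl --context no-preload-20210609012901-9941 create -f testdata/busybox.yaml
    # Block until the busybox pod is Ready, as the harness does by polling.
    kubectl --context no-preload-20210609012901-9941 wait --for=condition=Ready \
      pod -l integration-test=busybox --timeout=480s
    # Same sanity probe the test runs inside the container.
    kubectl --context no-preload-20210609012901-9941 exec busybox -- /bin/sh -c "ulimit -n"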

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.68s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-20210609012901-9941 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context no-preload-20210609012901-9941 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.68s)

TestStartStop/group/no-preload/serial/Stop (11.16s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210609012901-9941 --alsologtostderr -v=3
E0609 01:30:43.576003    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210609012901-9941 --alsologtostderr -v=3: (11.156543726s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.16s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941: exit status 7 (101.809051ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210609012901-9941 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)
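Note: "minikube status" signals cluster state through its exit code, so a stopped profile yields a non-zero exit; the harness logs it as "may be ok" and enables the addon anyway. Sketched with the same profile and flags as the log (the echo is illustrative):

    # Don't abort on the expected non-zero status of a stopped cluster.
    out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210609012901-9941 \
      || echo "status exited $? (stopped host, expected after minikube stop)"
    out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210609012901-9941 \
      --images=MetricsScraper=k8s.gcr.io/echoserver:1.4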

TestStartStop/group/no-preload/serial/SecondStart (343.7s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2
E0609 01:31:11.658294    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory

=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2: (5m43.354998453s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (343.70s)

TestStartStop/group/embed-certs/serial/DeployApp (9.51s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210609012903-9941 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [2c5f5268-c5a2-4cd4-b74d-1bd9cc11d657] Pending
helpers_test.go:335: "busybox" [2c5f5268-c5a2-4cd4-b74d-1bd9cc11d657] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [2c5f5268-c5a2-4cd4-b74d-1bd9cc11d657] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.011403031s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context embed-certs-20210609012903-9941 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.51s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-20210609012903-9941 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context embed-certs-20210609012903-9941 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.69s)

TestStartStop/group/embed-certs/serial/Stop (11.03s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210609012903-9941 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210609012903-9941 --alsologtostderr -v=3: (11.033073287s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.03s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941: exit status 7 (103.408738ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210609012903-9941 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/embed-certs/serial/SecondStart (381.41s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210609012903-9941 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210609012903-9941 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7: (6m21.079966819s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (381.41s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.56s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210609012935-9941 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [aafc4b4a-956e-4221-bb89-0d748583afed] Pending
helpers_test.go:335: "busybox" [aafc4b4a-956e-4221-bb89-0d748583afed] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [aafc4b4a-956e-4221-bb89-0d748583afed] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 11.014553662s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context default-k8s-different-port-20210609012935-9941 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (11.56s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.66s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-different-port-20210609012935-9941 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context default-k8s-different-port-20210609012935-9941 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonWhileActive (0.66s)

TestStartStop/group/default-k8s-different-port/serial/Stop (11.12s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210609012935-9941 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210609012935-9941 --alsologtostderr -v=3: (11.120679265s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (11.12s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941: exit status 7 (97.080718ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210609012935-9941 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (492.62s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210609012935-9941 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7
E0609 01:32:33.578619    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:33:34.046465    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210609012935-9941 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=docker --kubernetes-version=v1.20.7: (8m12.265322534s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (492.62s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 create -f testdata/busybox.yaml
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [ce27d667-c8c2-11eb-ba72-0242feccb8e4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [ce27d667-c8c2-11eb-ba72-0242feccb8e4] Running
start_stop_delete_test.go:169: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.010587701s
start_stop_delete_test.go:169: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.55s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.7s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-20210609012901-9941 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:188: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.70s)

TestStartStop/group/old-k8s-version/serial/Stop (11.11s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210609012901-9941 --alsologtostderr -v=3: (11.114224997s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (11.11s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941: exit status 7 (102.883604ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210609012901-9941 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/old-k8s-version/serial/SecondStart (430.23s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0
E0609 01:34:49.547284    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:35:17.419613    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:35:43.573805    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory

=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210609012901-9941 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=docker --kubernetes-version=v1.14.0: (7m9.8825144s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210609012901-9941 -n old-k8s-version-20210609012901-9941
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (430.23s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-5t72b" [783bcfb4-9d87-4b7b-a255-3a42d8a79b4e] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010840407s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-5t72b" [783bcfb4-9d87-4b7b-a255-3a42d8a79b4e] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006427007s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context no-preload-20210609012901-9941 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.3s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210609012901-9941 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: minikube-local-cache-test:functional-20210609010438-9941
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/no-preload/serial/Pause (2.76s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210609012901-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941: exit status 2 (320.972995ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941: exit status 2 (327.30939ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20210609012901-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210609012901-9941 -n no-preload-20210609012901-9941
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.76s)
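Note: the Pause subtest drives a full pause / verify / unpause / verify cycle, using status output and exit codes as assertions: while paused, APIServer reports "Paused" and Kubelet "Stopped", each with exit status 2, which the harness accepts. The same cycle condensed into a sketch (profile name from the log; comments are mine):

    P=no-preload-20210609012901-9941
    out/minikube-linux-amd64 pause -p "$P" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p "$P"  # "Paused"; exit 2 tolerated
    out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$P"    # "Stopped"; exit 2 tolerated
    out/minikube-linux-amd64 unpause -p "$P" --alsologtostderr -v=1
    out/minikube-linux-amd64 status --format={{.APIServer}} -p "$P"  # clean exit once resumed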

TestStartStop/group/newest-cni/serial/FirstStart (41.65s)
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:159: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210609013655-9941 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2
start_stop_delete_test.go:159: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210609013655-9941 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2: (41.653178369s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.65s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-20210609013655-9941 --images=MetricsServer=k8s.gcr.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:184: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.64s)

TestStartStop/group/newest-cni/serial/Stop (11.16s)
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:201: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210609013655-9941 --alsologtostderr -v=3
start_stop_delete_test.go:201: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210609013655-9941 --alsologtostderr -v=3: (11.158623346s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.16s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:212: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941
start_stop_delete_test.go:212: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941: exit status 7 (102.946379ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:212: status error: exit status 7 (may be ok)
start_stop_delete_test.go:219: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210609013655-9941 --images=MetricsScraper=k8s.gcr.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (19.01s)
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210609013655-9941 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:229: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210609013655-9941 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=docker --kubernetes-version=v1.22.0-alpha.2: (18.596346538s)
start_stop_delete_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (19.01s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-vp9s9" [487006b9-f897-47fb-8d4b-4ae7e08755ea] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012301831s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-vp9s9" [487006b9-f897-47fb-8d4b-4ae7e08755ea] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007755717s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context embed-certs-20210609012903-9941 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)
=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210609012903-9941 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: minikube-local-cache-test:functional-20210609010438-9941
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.36s)

TestStartStop/group/embed-certs/serial/Pause (2.94s)
=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210609012903-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941: exit status 2 (342.745456ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941: exit status 2 (364.74945ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20210609012903-9941 --alsologtostderr -v=1

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941

=== CONT  TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210609012903-9941 -n embed-certs-20210609012903-9941
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.94s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:246: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:257: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210609013655-9941 "sudo crictl images -o json"

=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: Found non-minikube image: minikube-local-cache-test:functional-20210609010438-9941
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.34s)

TestStartStop/group/newest-cni/serial/Pause (3.19s)
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210609013655-9941 --alsologtostderr -v=1

=== CONT  TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941: exit status 2 (351.460435ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941: exit status 2 (676.632919ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210609013655-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210609013655-9941 -n newest-cni-20210609013655-9941
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.19s)

TestNetworkPlugins/group/auto/Start (327.56s)
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker

=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=docker: (5m27.556393117s)
--- PASS: TestNetworkPlugins/group/auto/Start (327.56s)

TestNetworkPlugins/group/false/Start (97.33s)
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker
E0609 01:38:34.046683    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory
E0609 01:38:46.619206    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:39:49.546861    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p false-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=docker: (1m37.328375933s)
--- PASS: TestNetworkPlugins/group/false/Start (97.33s)

TestNetworkPlugins/group/false/KubeletFlags (0.29s)
=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p false-20210609012810-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.29s)

TestNetworkPlugins/group/false/NetCatPod (9.24s)
=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context false-20210609012810-9941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-mdcfq" [98dc326e-f155-4e2d-8199-08f57d619d91] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-mdcfq" [98dc326e-f155-4e2d-8199-08f57d619d91] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 9.004972743s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (9.24s)

TestNetworkPlugins/group/false/DNS (0.17s)
=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:162: (dbg) Run:  kubectl --context false-20210609012810-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.17s)

TestNetworkPlugins/group/false/Localhost (0.16s)
=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:181: (dbg) Run:  kubectl --context false-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.16s)

TestNetworkPlugins/group/false/HairPin (5.17s)
=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:231: (dbg) Run:  kubectl --context false-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context false-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.168389276s)

** stderr ** 
	command terminated with exit code 1

** /stderr **
--- PASS: TestNetworkPlugins/group/false/HairPin (5.17s)
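Note: another expected-failure pass. The hairpin probe connects from inside the netcat pod back to its own service, and for this network configuration the harness evidently accepts the refused connection as the passing outcome. The probe itself, lifted from the log (the echo is illustrative):

    # Hairpin check: dial the pod's own service from within the pod.
    kubectl --context false-20210609012810-9941 exec deployment/netcat -- \
      /bin/sh -c "nc -w 5 -i 5 -z netcat 8080" \
      || echo "hairpin connection refused (treated as pass for this plugin)"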

TestNetworkPlugins/group/cilium/Start (129.2s)
=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=docker: (2m9.198618973s)
--- PASS: TestNetworkPlugins/group/cilium/Start (129.20s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-f7s77" [a2e989c6-99c7-49ea-8203-c7d179c3f527] Running
start_stop_delete_test.go:247: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012934634s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.02s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-f7s77" [a2e989c6-99c7-49ea-8203-c7d179c3f527] Running
start_stop_delete_test.go:260: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.006027055s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context default-k8s-different-port-20210609012935-9941 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210609012935-9941 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: minikube-local-cache-test:functional-20210609010438-9941
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)

                                                
                                    
TestStartStop/group/default-k8s-different-port/serial/Pause (2.72s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210609012935-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941: exit status 2 (322.33174ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941
start_stop_delete_test.go:284: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941: exit status 2 (340.984232ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:284: status error: exit status 2 (may be ok)
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210609012935-9941 --alsologtostderr -v=1
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941
E0609 01:40:32.405647    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:40:32.410899    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:40:32.421135    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:40:32.441379    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
start_stop_delete_test.go:284: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210609012935-9941 -n default-k8s-different-port-20210609012935-9941
E0609 01:40:32.482371    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:40:32.562858    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:40:32.723345    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.72s)
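Decoding the Pause sequence above: after pause, minikube status exits with code 2 while reporting the API server as Paused and the kubelet as Stopped, which the test explicitly tolerates ("status error: exit status 2 (may be ok)"); unpause restores both. A sketch of the same round-trip by hand, tolerating the non-zero status exits:

	p=default-k8s-different-port-20210609012935-9941
	out/minikube-linux-amd64 pause -p "$p"
	out/minikube-linux-amd64 status --format={{.APIServer}} -p "$p" || true  # "Paused", exit 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p "$p" || true    # "Stopped", exit 2
	out/minikube-linux-amd64 unpause -p "$p"
	out/minikube-linux-amd64 status --format={{.APIServer}} -p "$p"          # exit 0 once running again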

                                                
                                    
TestNetworkPlugins/group/calico/Start (126.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker
E0609 01:40:37.526874    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:40:42.647416    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:40:43.573714    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/addons-20210609005737-9941/client.crt: no such file or directory
E0609 01:40:52.889077    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:41:13.370085    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:41:44.016523    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:44.021803    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:44.032336    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:44.052648    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:44.092980    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:44.173308    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:44.333463    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:44.654238    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:45.294623    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:46.575657    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=docker: (2m6.101406947s)
--- PASS: TestNetworkPlugins/group/calico/Start (126.10s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-5c7t7" [af9cc114-c8c3-11eb-a78f-02427f02d9a2] Running
E0609 01:41:49.135942    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.010707781s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-5c7t7" [af9cc114-c8c3-11eb-a78f-02427f02d9a2] Running
E0609 01:41:54.256901    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
E0609 01:41:54.331213    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
start_stop_delete_test.go:260: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00516241s
start_stop_delete_test.go:264: (dbg) Run:  kubectl --context old-k8s-version-20210609012901-9941 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:277: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210609012901-9941 "sudo crictl images -o json"
start_stop_delete_test.go:277: Found non-minikube image: busybox:1.28.4-glibc
start_stop_delete_test.go:277: Found non-minikube image: minikube-local-cache-test:functional-20210609010438-9941
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.31s)
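VerifyKubernetesImages dumps the container runtime's image list as JSON over SSH and logs anything it does not recognize as a minikube/Kubernetes image; the two "non-minikube" hits above are test fixtures and do not fail the check. A sketch for eyeballing the same list, assuming the standard crictl JSON schema (a top-level images array with repoTags):

	out/minikube-linux-amd64 ssh -p old-k8s-version-20210609012901-9941 \
	  "sudo crictl images -o json" | jq -r '.images[].repoTags[]' | sort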

                                                
                                    
TestNetworkPlugins/group/custom-weave/Start (122.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=docker: (2m2.209102131s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (122.21s)

                                                
                                    
TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:335: "cilium-2rdhk" [d04f148b-d021-43b6-a522-4cd4ec70a18f] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.014462911s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)
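Each CNI group first confirms the plugin's own controller pod is healthy before exercising traffic. An equivalent manual check, assuming cilium's pods carry the k8s-app=cilium label shown above (the 600s timeout is a comfortable slice of the test's 10m0s budget):

	kubectl --context cilium-20210609012810-9941 -n kube-system wait pod \
	  -l k8s-app=cilium --for=condition=Ready --timeout=600s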

                                                
                                    
TestNetworkPlugins/group/cilium/KubeletFlags (0.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210609012810-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.31s)

                                                
                                    
TestNetworkPlugins/group/cilium/NetCatPod (10.41s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context cilium-20210609012810-9941 replace --force -f testdata/netcat-deployment.yaml
E0609 01:42:24.977417    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-7lw4s" [f45436f1-f9c4-45ca-87b5-576972e88582] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-7lw4s" [f45436f1-f9c4-45ca-87b5-576972e88582] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 10.005871849s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (10.41s)

                                                
                                    
TestNetworkPlugins/group/cilium/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:162: (dbg) Run:  kubectl --context cilium-20210609012810-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/cilium/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:181: (dbg) Run:  kubectl --context cilium-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/cilium/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:231: (dbg) Run:  kubectl --context cilium-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (146.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=docker: (2m26.026838198s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (146.03s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:335: "calico-node-8bhjk" [c712c6b3-a641-4615-b483-4eacfb3a9945] Running
net_test.go:106: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.015008235s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210609012810-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.34s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context calico-20210609012810-9941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:131: (dbg) Done: kubectl --context calico-20210609012810-9941 replace --force -f testdata/netcat-deployment.yaml: (5.0682356s)
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-klfxr" [4ba28164-772d-47fc-b991-c39c445f2c6d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-klfxr" [4ba28164-772d-47fc-b991-c39c445f2c6d] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.006119411s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.13s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:162: (dbg) Run:  kubectl --context calico-20210609012810-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:181: (dbg) Run:  kubectl --context calico-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:231: (dbg) Run:  kubectl --context calico-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.19s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (121.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker
E0609 01:43:16.252190    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/no-preload-20210609012901-9941/client.crt: no such file or directory
E0609 01:43:34.046754    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/functional-20210609010438-9941/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210609012810-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=docker: (2m1.293317937s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (121.29s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210609012809-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (14.43s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context auto-20210609012809-9941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-txgsb" [c51345bd-983a-4082-90ef-774dbb694063] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-txgsb" [c51345bd-983a-4082-90ef-774dbb694063] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 14.036291056s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (14.43s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:162: (dbg) Run:  kubectl --context auto-20210609012809-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:181: (dbg) Run:  kubectl --context auto-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (5.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:231: (dbg) Run:  kubectl --context auto-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:231: (dbg) Non-zero exit: kubectl --context auto-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080": exit status 1 (5.141890865s)

                                                
                                                
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
--- PASS: TestNetworkPlugins/group/auto/HairPin (5.14s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (100.99s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker

                                                
                                                
=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=docker: (1m40.989138116s)
--- PASS: TestNetworkPlugins/group/bridge/Start (100.99s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210609012810-9941 "pgrep -a kubelet"
E0609 01:44:17.298602    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:17.303866    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:17.314095    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:17.334373    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:17.374766    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:17.455134    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/custom-weave/NetCatPod (10.42s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context custom-weave-20210609012810-9941 replace --force -f testdata/netcat-deployment.yaml
E0609 01:44:17.615843    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:17.936163    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-7l8sh" [770cd78e-ad96-429b-a4a5-f640304c5e64] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0609 01:44:18.576597    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:19.857430    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
helpers_test.go:335: "netcat-66fbc655d5-7l8sh" [770cd78e-ad96-429b-a4a5-f640304c5e64] Running
E0609 01:44:22.417964    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:27.538715    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:27.859453    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/default-k8s-different-port-20210609012935-9941/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 10.006450796s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (10.42s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (106.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker
E0609 01:44:37.779459    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:44:49.547462    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/skaffold-20210609012352-9941/client.crt: no such file or directory
E0609 01:44:52.746424    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:52.751669    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:52.761906    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:52.782177    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:52.822527    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:52.902750    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:53.063659    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:53.383998    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:54.024553    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:55.305491    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:57.866256    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
E0609 01:44:58.260303    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/old-k8s-version-20210609012901-9941/client.crt: no such file or directory
E0609 01:45:02.986511    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p kubenet-20210609012809-9941 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=docker: (1m46.193571365s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (106.19s)
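Unlike the CNI groups above, kubenet is not selected with --cni: as the start command shows, it comes in through the kubelet's legacy --network-plugin flag. A minimal sketch with a hypothetical profile name:

	# kubenet is a kubelet network plugin, not a CNI manifest
	out/minikube-linux-amd64 start -p kubenet-demo --memory=2048 \
	  --network-plugin=kubenet --driver=docker --container-runtime=docker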

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210609012809-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context enable-default-cni-20210609012809-9941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-bcx6k" [17a1fc2a-b86b-4811-86d4-40286df7deed] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-bcx6k" [17a1fc2a-b86b-4811-86d4-40286df7deed] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.008609588s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.59s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:335: "kindnet-nr58d" [49f38c51-5b1c-4a5a-80a3-9ff3e81017e0] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:106: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.589909024s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.59s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210609012810-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kindnet-20210609012810-9941 replace --force -f testdata/netcat-deployment.yaml
E0609 01:45:13.227070    9941 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-docker-11610-6221-64a41824c53cd396e29af8e40a1e5ab125aa9bf4/.minikube/profiles/false-20210609012810-9941/client.crt: no such file or directory
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-5wxvc" [b244d58e-4a72-43d3-9bbc-c6a11d8da9a1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-5wxvc" [b244d58e-4a72-43d3-9bbc-c6a11d8da9a1] Running

                                                
                                                
=== CONT  TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:145: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.00536756s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:162: (dbg) Run:  kubectl --context enable-default-cni-20210609012809-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.20s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:181: (dbg) Run:  kubectl --context enable-default-cni-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:231: (dbg) Run:  kubectl --context enable-default-cni-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kindnet-20210609012810-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kindnet-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kindnet-20210609012810-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210609012809-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context bridge-20210609012809-9941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-5nzzc" [ab385a7f-884b-4419-a902-799d486cecee] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-5nzzc" [ab385a7f-884b-4419-a902-799d486cecee] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.054786892s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:162: (dbg) Run:  kubectl --context bridge-20210609012809-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:181: (dbg) Run:  kubectl --context bridge-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:231: (dbg) Run:  kubectl --context bridge-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:119: (dbg) Run:  out/minikube-linux-amd64 ssh -p kubenet-20210609012809-9941 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (9.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:131: (dbg) Run:  kubectl --context kubenet-20210609012809-9941 replace --force -f testdata/netcat-deployment.yaml
net_test.go:145: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-9lhm2" [7acaf67d-31d6-4a46-9b73-7f7789454ebf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-9lhm2" [7acaf67d-31d6-4a46-9b73-7f7789454ebf] Running
net_test.go:145: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 9.004937855s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (9.23s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:162: (dbg) Run:  kubectl --context kubenet-20210609012809-9941 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:181: (dbg) Run:  kubectl --context kubenet-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:231: (dbg) Run:  kubectl --context kubenet-20210609012809-9941 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.14s)

                                                
                                    

Test skip (20/266)

TestDownloadOnly/v1.14.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.14.0/binaries (0.00s)
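
The DownloadOnly skips in this group all share one guard: when a preload tarball exists for the Kubernetes version under test, images and binaries are delivered inside it, so the per-artifact cache checks are pointless. A hypothetical sketch of that guard's shape; preloadExists is a stand-in boolean, not minikube's real preload-detection API, and the real check lives in aaa_download_only_test.go:

    // preload_skip_sketch.go - hypothetical shape of the skip guard.
    package download_test

    import "testing"

    // Stand-in for minikube's real preload detection (an assumption of
    // this sketch, not the project's actual API).
    var preloadExists = true

    func TestCachedImagesSketch(t *testing.T) {
        if preloadExists {
            t.Skip("Preload exists, images won't be cached")
        }
        // per-image cache assertions would only run without a preload
    }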

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.20.7/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.7/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.7/cached-images (0.00s)

TestDownloadOnly/v1.20.7/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.7/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.7/binaries (0.00s)

TestDownloadOnly/v1.20.7/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.7/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.7/kubectl (0.00s)

TestDownloadOnly/v1.22.0-alpha.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.2/cached-images
aaa_download_only_test.go:119: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-alpha.2/cached-images (0.00s)

TestDownloadOnly/v1.22.0-alpha.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.2/binaries
aaa_download_only_test.go:138: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.22.0-alpha.2/binaries (0.00s)

TestDownloadOnly/v1.22.0-alpha.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.2/kubectl
aaa_download_only_test.go:154: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-alpha.2/kubectl (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:116: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:189: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:472: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)
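
Several of the skips above (the kubectl subtests for darwin and windows, the HyperKit pair, the TunnelCmd DNS-forwarding trio, and TestScheduledStopWindows) are plain platform gates on runtime.GOOS. A minimal sketch of that pattern; this is the generic shape, not the project's actual helper:

    // goos_gate_sketch.go - the generic platform-gate pattern behind
    // these skips, using only the standard library.
    package gates_test

    import (
        "runtime"
        "testing"
    )

    func TestScheduledStopSketch(t *testing.T) {
        if runtime.GOOS != "windows" {
            t.Skip("test only runs on windows")
        }
        // windows-specific scheduled-stop assertions would follow
    }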

TestStartStop/group/disable-driver-mounts (0.54s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:171: Cleaning up "disable-driver-mounts-20210609012934-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210609012934-9941
--- SKIP: TestStartStop/group/disable-driver-mounts (0.54s)

TestNetworkPlugins/group/flannel (0.44s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:76: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
helpers_test.go:171: Cleaning up "flannel-20210609012809-9941" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p flannel-20210609012809-9941
--- SKIP: TestNetworkPlugins/group/flannel (0.44s)
