Test Report: Docker_Linux_containerd master

Commit: c31bd57f93d45726e4bd30607374f8c720e70e95

Failing tests (4/247)

Order  Failed test                              Duration (s)
216    TestPause/serial/DeletePaused            711.05
265    TestNetworkPlugins/group/false/Start     637.46
284    TestNetworkPlugins/group/auto/HairPin    0.17
301    TestNetworkPlugins/group/kubenet/Start   629.71
TestPause/serial/DeletePaused (711.05s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-20210507222034-391940 --alsologtostderr -v=5

=== CONT  TestPause/serial/DeletePaused
pause_test.go:129: (dbg) Non-zero exit: out/minikube-linux-amd64 delete -p pause-20210507222034-391940 --alsologtostderr -v=5: signal: killed (11m46.054709776s)

-- stdout --
	* Deleting "pause-20210507222034-391940" in docker ...
	* Deleting container "pause-20210507222034-391940" ...
	* Stopping node "pause-20210507222034-391940"  ...
	* Powering off "pause-20210507222034-391940" via SSH ...

-- /stdout --
** stderr ** 
	I0507 22:23:48.105410  562635 out.go:291] Setting OutFile to fd 1 ...
	I0507 22:23:48.105672  562635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:23:48.105687  562635 out.go:304] Setting ErrFile to fd 2...
	I0507 22:23:48.105693  562635 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:23:48.105856  562635 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	I0507 22:23:48.106205  562635 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io --format {{.Names}}
	I0507 22:23:48.159601  562635 delete.go:210] DeleteProfiles
	I0507 22:23:48.159630  562635 delete.go:233] Deleting pause-20210507222034-391940
	I0507 22:23:48.159641  562635 delete.go:238] pause-20210507222034-391940 configuration: &{Name:pause-20210507222034-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:pause-20210507222034-391940 Namespace:default APIServerName:minikubeCA APISer
verNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.85.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 22:23:48.161704  562635 out.go:170] * Deleting "pause-20210507222034-391940" in docker ...
	I0507 22:23:48.161785  562635 delete.go:48] deleting possible leftovers for pause-20210507222034-391940 (driver=docker) ...
	I0507 22:23:48.161838  562635 cli_runner.go:115] Run: docker ps -a --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940 --format {{.Names}}
	I0507 22:23:48.218269  562635 out.go:170] * Deleting container "pause-20210507222034-391940" ...
	I0507 22:23:48.218360  562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}}
	I0507 22:23:48.276929  562635 cli_runner.go:115] Run: docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0"
	I0507 22:23:49.466752  562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}}
	I0507 22:23:49.516955  562635 oci.go:646] temporary error: container pause-20210507222034-391940 status is Running but expect it to be exited
	I0507 22:23:49.517002  562635 oci.go:652] Successfully shutdown container pause-20210507222034-391940
	I0507 22:23:49.517057  562635 cli_runner.go:115] Run: docker rm -f -v pause-20210507222034-391940
	W0507 22:28:48.162402  562635 cli_runner.go:162] docker rm -f -v pause-20210507222034-391940 returned with exit code -1
	I0507 22:28:48.162431  562635 cli_runner.go:168] Completed: docker rm -f -v pause-20210507222034-391940: (4m58.645303478s)
	E0507 22:28:48.162502  562635 delete.go:56] error deleting container "pause-20210507222034-391940". You may want to delete it manually :
	delete pause-20210507222034-391940: docker rm -f -v pause-20210507222034-391940: signal: killed
	stdout:
	
	stderr:
	I0507 22:28:48.162528  562635 volumes.go:79] trying to delete all docker volumes with label name.minikube.sigs.k8s.io=pause-20210507222034-391940
	I0507 22:28:48.162641  562635 cli_runner.go:115] Run: docker volume ls --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940 --format {{.Name}}
	I0507 22:28:48.203899  562635 cli_runner.go:115] Run: docker volume rm --force pause-20210507222034-391940
	W0507 22:28:48.203949  562635 delete.go:64] error deleting volumes (might be okay).
	To see the list of volumes run: 'docker volume ls'
	:[deleting "pause-20210507222034-391940"]
	I0507 22:28:48.203987  562635 cli_runner.go:115] Run: docker network ls --filter=label=created_by.minikube.sigs.k8s.io --format {{.Name}}
	I0507 22:28:48.244509  562635 cli_runner.go:115] Run: docker network inspect old-k8s-version-20210507222527-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0507 22:28:48.282310  562635 cli_runner.go:115] Run: docker network rm old-k8s-version-20210507222527-391940
	W0507 22:28:48.321852  562635 cli_runner.go:162] docker network rm old-k8s-version-20210507222527-391940 returned with exit code 1
	I0507 22:28:48.321967  562635 cli_runner.go:115] Run: docker network inspect pause-20210507211419-97507 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0507 22:28:48.365003  562635 cli_runner.go:115] Run: docker network rm pause-20210507211419-97507
	W0507 22:28:48.405578  562635 cli_runner.go:162] docker network rm pause-20210507211419-97507 returned with exit code 1
	I0507 22:28:48.405697  562635 cli_runner.go:115] Run: docker network inspect pause-20210507222034-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0507 22:28:48.445560  562635 cli_runner.go:115] Run: docker network rm pause-20210507222034-391940
	W0507 22:28:48.485123  562635 cli_runner.go:162] docker network rm pause-20210507222034-391940 returned with exit code 1
	W0507 22:28:48.485178  562635 delete.go:69] error deleting leftover networks (might be okay).
	To see the list of networks: 'docker network ls'
	:[unable to delete a network that is attached to a running container unable to delete a network that is attached to a running container unable to delete a network that is attached to a running container]
	I0507 22:28:48.485195  562635 volumes.go:101] trying to prune all docker volumes with label name.minikube.sigs.k8s.io=pause-20210507222034-391940
	I0507 22:28:48.485233  562635 cli_runner.go:115] Run: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940
	W0507 22:28:48.485257  562635 delete.go:79] error pruning volume (might be okay):
	[prune volume by label name.minikube.sigs.k8s.io=pause-20210507222034-391940: docker volume prune -f --filter label=name.minikube.sigs.k8s.io=pause-20210507222034-391940: context deadline exceeded
	stdout:
	
	stderr:
	]
	I0507 22:28:48.485979  562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}}
	I0507 22:28:48.526544  562635 stop.go:39] StopHost: pause-20210507222034-391940
	I0507 22:28:48.540786  562635 out.go:170] * Stopping node "pause-20210507222034-391940"  ...
	I0507 22:28:48.540872  562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}}
	W0507 22:28:48.589739  562635 register.go:129] "PowerOff" was not found within the registered steps for "Deleting": [Deleting Stopping Deleting Done]
	I0507 22:28:48.591837  562635 out.go:170] * Powering off "pause-20210507222034-391940" via SSH ...
	I0507 22:28:48.591904  562635 cli_runner.go:115] Run: docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0"
	W0507 22:28:48.692241  562635 cli_runner.go:162] docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0" returned with exit code 126
	I0507 22:28:48.692277  562635 oci.go:632] error shutdown pause-20210507222034-391940: docker exec --privileged -t pause-20210507222034-391940 /bin/bash -c "sudo init 0": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown
	
	stderr:
	I0507 22:28:49.692729  562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}}
	I0507 22:28:49.736272  562635 oci.go:646] temporary error: container pause-20210507222034-391940 status is Running but expect it to be exited
	I0507 22:28:49.736296  562635 oci.go:652] Successfully shutdown container pause-20210507222034-391940
	I0507 22:28:49.736303  562635 stop.go:88] shutdown container: err=<nil>
	I0507 22:28:49.736353  562635 main.go:128] libmachine: Stopping "pause-20210507222034-391940"...
	I0507 22:28:49.736407  562635 cli_runner.go:115] Run: docker container inspect pause-20210507222034-391940 --format={{.State.Status}}
	I0507 22:28:49.779223  562635 kic_runner.go:94] Run: systemctl --version
	I0507 22:28:49.779251  562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 systemctl --version]
	I0507 22:28:49.866908  562635 kic_runner.go:94] Run: sudo service kubelet stop
	I0507 22:28:49.866929  562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 sudo service kubelet stop]
	I0507 22:28:49.958245  562635 openrc.go:161] stop output: -- stdout --
	OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown
	
	-- /stdout --
	W0507 22:28:49.958273  562635 kic.go:437] couldn't stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown
	
	stderr:
	I0507 22:28:49.958336  562635 kic_runner.go:94] Run: sudo service kubelet stop
	I0507 22:28:49.958353  562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 sudo service kubelet stop]
	I0507 22:28:50.076864  562635 openrc.go:161] stop output: -- stdout --
	OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown
	
	-- /stdout --
	W0507 22:28:50.076887  562635 kic.go:439] couldn't force stop kubelet. will continue with stop anyways: sudo service kubelet stop: exit status 126
	stdout:
	OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown
	
	stderr:
	I0507 22:28:50.076906  562635 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name: Namespaces:[kube-system kubernetes-dashboard storage-gluster istio-operator]}
	I0507 22:28:50.076991  562635 kic_runner.go:94] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator"
	I0507 22:28:50.077000  562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 sudo -s eval crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator]
	I0507 22:28:50.181313  562635 kic.go:450] unable list containers : crictl list: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system; crictl ps -a --quiet --label io.kubernetes.pod.namespace=kubernetes-dashboard; crictl ps -a --quiet --label io.kubernetes.pod.namespace=storage-gluster; crictl ps -a --quiet --label io.kubernetes.pod.namespace=istio-operator": exit status 126
	stdout:
	OCI runtime exec failed: exec failed: container_linux.go:370: starting container process caused: process_linux.go:103: executing setns process caused: exit status 1: unknown
	
	stderr:
	I0507 22:28:50.181339  562635 kic.go:460] successfully stopped kubernetes!
	I0507 22:28:50.181424  562635 kic_runner.go:94] Run: pgrep kube-apiserver
	I0507 22:28:50.181434  562635 kic_runner.go:115] Args: [docker exec --privileged pause-20210507222034-391940 pgrep kube-apiserver]

** /stderr **
pause_test.go:131: failed to delete minikube with args: "out/minikube-linux-amd64 delete -p pause-20210507222034-391940 --alsologtostderr -v=5" : signal: killed
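The delete was killed because `docker rm -f -v` hung for almost five minutes, leaving the profile's container, volume, and network behind. Following the log's own hint ("You may want to delete it manually"), the manual cleanup can be sketched as below. The profile name is taken from this run; the `run` helper and `DRY_RUN` variable are illustrative, not minikube tooling:

```shell
# Hypothetical manual cleanup for the leftover minikube profile.
# With DRY_RUN=1 (the default) the commands are only printed;
# set DRY_RUN=0 to actually execute them against your docker daemon.
PROFILE="pause-20210507222034-391940"
DRY_RUN="${DRY_RUN:-1}"

run() {
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $*"
  else
    "$@"
  fi
}

run docker rm -f -v "$PROFILE"           # container + its anonymous volumes
run docker volume rm --force "$PROFILE"  # the named /var data volume
run docker network rm "$PROFILE"         # the per-profile network
```

These are the same three commands the failed delete attempted (see the `cli_runner.go` lines above); the network removal will still fail while a container is attached to it, so remove the container first.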
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestPause/serial/DeletePaused]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect pause-20210507222034-391940
helpers_test.go:231: (dbg) docker inspect pause-20210507222034-391940:

-- stdout --
	[
	    {
	        "Id": "1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3",
	        "Created": "2021-05-07T22:20:36.202407354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 533530,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-05-07T22:20:37.143574617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bcd131522525c9c3b8695a8d144be8d177bcd5614ec5397f188115d3be0bbc24",
	        "ResolvConfPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hosts",
	        "LogPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3-json.log",
	        "Name": "/pause-20210507222034-391940",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210507222034-391940:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210507222034-391940",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7-init/diff:/var/lib/docker/overlay2/1e5fa0ed3c3f4bec9b97cabd8aaa709f5915b54c42d527ba46e8ffa9ebcb7f9a/diff:/var/lib/docker/overlay2/00098e5ff94787f022c282488f937bf3694bcc2f80e6f324f2cb94189fadc609/diff:/var/lib/docker/overlay2/0751219afdacf9c8a75fced952b1ad013a8d5b6fbee07adc96e9f305877d0131/diff:/var/lib/docker/overlay2/4fed3d3ec94e4b275966ac815cabeee3572325ca655dcb69e8d31d2051468a10/diff:/var/lib/docker/overlay2/a78b251d86ddd3460876cbc21fef7421c2e76ba3f3198b79f3af7fe8092297f6/diff:/var/lib/docker/overlay2/f3609509e8e931753320e2da77988a3cdd78a58c167b428b96a3aa29971edb5e/diff:/var/lib/docker/overlay2/ebeb53c34330c6713e55bb0d98076f6618884e3bdcd6b888ad1965c69f65b14d/diff:/var/lib/docker/overlay2/1efdecf3c4a2226dd59cc51906581e2326beec3a6b7090c09e437b80c90794b0/diff:/var/lib/docker/overlay2/4c7309d0146fa644c2eb195cb344f6b10894237fb65248ee8391d1790ac7f765/diff:/var/lib/docker/overlay2/424a19
d5d18bedf5b29c5b9ffd2c72e8c9e112f2fd414acd046bfa963d0526c7/diff:/var/lib/docker/overlay2/1846dd5e13995c56277d370ac401df36ad796851e8f2315dfab9ff02f487b8fc/diff:/var/lib/docker/overlay2/9393786bec1ad7d470bbbb5c7a94ec2131900fa0c6d2ad39b1039fc6795a2683/diff:/var/lib/docker/overlay2/708ff6a0ffe352ea29dabc0c453ebb09ccede3e24ae9f3fb51e06680ed43e597/diff:/var/lib/docker/overlay2/5a536ba767666ddc007ad059bfa077204239088ff6093831b1b5a0aff36a88ea/diff:/var/lib/docker/overlay2/1d4b0ac5e44186da0f4ee859bb5c23df30087789d88e253dfd57e0ffb21bb88c/diff:/var/lib/docker/overlay2/2b67d6a3428317a2f483420befe919fd660743c5f1494d075867507afe929344/diff:/var/lib/docker/overlay2/abef0f23a7f068f22910d10fcf3ed65c4804f84a4a9aa126a6ac79666f87ab63/diff:/var/lib/docker/overlay2/ec0c450f32e0e573b78fc8537f87456c96a10f353e8bb6e28b4cde51d4b78237/diff:/var/lib/docker/overlay2/ba3b904a6ce3d016a1ef237a88f0e5d4d3b08a8c68e6e4c808b54ffb59e19ee3/diff:/var/lib/docker/overlay2/160d3a3a918b002bb27e1f108db150483cfb4c1383ab9bea5f7d5b983af0f57f/diff:/var/lib/d
ocker/overlay2/ed771b935b96f93ce682cdd9d22155225a918436de84fb5d56eb6214e36d7e27/diff:/var/lib/docker/overlay2/a298f74d3f51b9716985e7c6a84a4fe16a9badceeb4fbcc5847e9313a496c203/diff:/var/lib/docker/overlay2/7f4ddade1e222fcfd5747b07b270a54575ecfdbdf23dc72c6aa8984cb14b4f6b/diff:/var/lib/docker/overlay2/8522467e2a2b9517f0e9fe828bf20d40830fb4364323ea1b17c1ae43e68f1633/diff:/var/lib/docker/overlay2/7b8ac1e2dcffd2cd29a0fe315f23ba717abac176d21484016b19e33e1ceb3f15/diff:/var/lib/docker/overlay2/219fbaff646669aefdda08db39e5c449632d42e036ba372e6fbfd2e74d05895c/diff:/var/lib/docker/overlay2/169017ab906e8cd6c768272fbbd27db4564b7ea84520773194f7b8d1c5725ce4/diff:/var/lib/docker/overlay2/3f2355256f7a67382c67f2079a79f9a3568cd4aac75dcb8e549d040ea3e3801c/diff:/var/lib/docker/overlay2/049eedb4ea37711e06782dfa1648c66d0e215e8b8eb540da6bd9b7729e88b4c6/diff:/var/lib/docker/overlay2/685ece42c012e8b988affc555e627ea46a42003f7fb6511dc68fb9da6c515fd8/diff:/var/lib/docker/overlay2/224f8f237d1ebeb57711074d5b9338b377abc164e67d85cd8b482640627
98e8a/diff:/var/lib/docker/overlay2/280191c44865a7db266046c55f36cee27c985b893bca0a97310569a5df684c8a/diff:/var/lib/docker/overlay2/2a04e90c25bcb0264edd485b59f54c8e6c28a2d0c63f696590f1876b164e0ad8/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210507222034-391940",
	                "Source": "/var/lib/docker/volumes/pause-20210507222034-391940/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210507222034-391940",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210507222034-391940",
	                "name.minikube.sigs.k8s.io": "pause-20210507222034-391940",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fb257f657a455a5c61064a642d32b732e65559debbded043f37bf425b0822a7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8fb257f657a4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210507222034-391940": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1f3a30720296"
	                    ],
	                    "NetworkID": "66090a2bc48e8a0ec3403c0dc0bc3b1b9148ac10b973fc1dc8134d7bbd25b00c",
	                    "EndpointID": "e54131d7b1bfd577149c0050cd0ed718fe2b0322b3e8ec4e35cfefd08f0113ea",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940

=== CONT  TestPause/serial/DeletePaused
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940: exit status 3 (2.452187525s)

-- stdout --
	Error

-- /stdout --
** stderr ** 
	E0507 22:35:36.554832  643808 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38830->127.0.0.1:33197: read: connection reset by peer
	E0507 22:35:36.554850  643808 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38830->127.0.0.1:33197: read: connection reset by peer

** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "pause-20210507222034-391940" host is not running, skipping log retrieval (state="Error")
helpers_test.go:218: -----------------------post-mortem--------------------------------
helpers_test.go:226: ======>  post-mortem[TestPause/serial/DeletePaused]: docker inspect <======
helpers_test.go:227: (dbg) Run:  docker inspect pause-20210507222034-391940
helpers_test.go:231: (dbg) docker inspect pause-20210507222034-391940:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3",
	        "Created": "2021-05-07T22:20:36.202407354Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 533530,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2021-05-07T22:20:37.143574617Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:bcd131522525c9c3b8695a8d144be8d177bcd5614ec5397f188115d3be0bbc24",
	        "ResolvConfPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hostname",
	        "HostsPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/hosts",
	        "LogPath": "/var/lib/docker/containers/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3/1f3a30720296e3aa24688d182815dfedb34265abf3aaf9e0f52d1b1736bfb3b3-json.log",
	        "Name": "/pause-20210507222034-391940",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "pause-20210507222034-391940:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "pause-20210507222034-391940",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "CapAdd": null,
	            "CapDrop": null,
	            "Capabilities": null,
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 0,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": null,
	            "BlkioDeviceWriteBps": null,
	            "BlkioDeviceReadIOps": null,
	            "BlkioDeviceWriteIOps": null,
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "KernelMemory": 0,
	            "KernelMemoryTCP": 0,
	            "MemoryReservation": 0,
	            "MemorySwap": 0,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": null,
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7-init/diff:/var/lib/docker/overlay2/1e5fa0ed3c3f4bec9b97cabd8aaa709f5915b54c42d527ba46e8ffa9ebcb7f9a/diff:/var/lib/docker/overlay2/00098e5ff94787f022c282488f937bf3694bcc2f80e6f324f2cb94189fadc609/diff:/var/lib/docker/overlay2/0751219afdacf9c8a75fced952b1ad013a8d5b6fbee07adc96e9f305877d0131/diff:/var/lib/docker/overlay2/4fed3d3ec94e4b275966ac815cabeee3572325ca655dcb69e8d31d2051468a10/diff:/var/lib/docker/overlay2/a78b251d86ddd3460876cbc21fef7421c2e76ba3f3198b79f3af7fe8092297f6/diff:/var/lib/docker/overlay2/f3609509e8e931753320e2da77988a3cdd78a58c167b428b96a3aa29971edb5e/diff:/var/lib/docker/overlay2/ebeb53c34330c6713e55bb0d98076f6618884e3bdcd6b888ad1965c69f65b14d/diff:/var/lib/docker/overlay2/1efdecf3c4a2226dd59cc51906581e2326beec3a6b7090c09e437b80c90794b0/diff:/var/lib/docker/overlay2/4c7309d0146fa644c2eb195cb344f6b10894237fb65248ee8391d1790ac7f765/diff:/var/lib/docker/overlay2/424a19
d5d18bedf5b29c5b9ffd2c72e8c9e112f2fd414acd046bfa963d0526c7/diff:/var/lib/docker/overlay2/1846dd5e13995c56277d370ac401df36ad796851e8f2315dfab9ff02f487b8fc/diff:/var/lib/docker/overlay2/9393786bec1ad7d470bbbb5c7a94ec2131900fa0c6d2ad39b1039fc6795a2683/diff:/var/lib/docker/overlay2/708ff6a0ffe352ea29dabc0c453ebb09ccede3e24ae9f3fb51e06680ed43e597/diff:/var/lib/docker/overlay2/5a536ba767666ddc007ad059bfa077204239088ff6093831b1b5a0aff36a88ea/diff:/var/lib/docker/overlay2/1d4b0ac5e44186da0f4ee859bb5c23df30087789d88e253dfd57e0ffb21bb88c/diff:/var/lib/docker/overlay2/2b67d6a3428317a2f483420befe919fd660743c5f1494d075867507afe929344/diff:/var/lib/docker/overlay2/abef0f23a7f068f22910d10fcf3ed65c4804f84a4a9aa126a6ac79666f87ab63/diff:/var/lib/docker/overlay2/ec0c450f32e0e573b78fc8537f87456c96a10f353e8bb6e28b4cde51d4b78237/diff:/var/lib/docker/overlay2/ba3b904a6ce3d016a1ef237a88f0e5d4d3b08a8c68e6e4c808b54ffb59e19ee3/diff:/var/lib/docker/overlay2/160d3a3a918b002bb27e1f108db150483cfb4c1383ab9bea5f7d5b983af0f57f/diff:/var/lib/d
ocker/overlay2/ed771b935b96f93ce682cdd9d22155225a918436de84fb5d56eb6214e36d7e27/diff:/var/lib/docker/overlay2/a298f74d3f51b9716985e7c6a84a4fe16a9badceeb4fbcc5847e9313a496c203/diff:/var/lib/docker/overlay2/7f4ddade1e222fcfd5747b07b270a54575ecfdbdf23dc72c6aa8984cb14b4f6b/diff:/var/lib/docker/overlay2/8522467e2a2b9517f0e9fe828bf20d40830fb4364323ea1b17c1ae43e68f1633/diff:/var/lib/docker/overlay2/7b8ac1e2dcffd2cd29a0fe315f23ba717abac176d21484016b19e33e1ceb3f15/diff:/var/lib/docker/overlay2/219fbaff646669aefdda08db39e5c449632d42e036ba372e6fbfd2e74d05895c/diff:/var/lib/docker/overlay2/169017ab906e8cd6c768272fbbd27db4564b7ea84520773194f7b8d1c5725ce4/diff:/var/lib/docker/overlay2/3f2355256f7a67382c67f2079a79f9a3568cd4aac75dcb8e549d040ea3e3801c/diff:/var/lib/docker/overlay2/049eedb4ea37711e06782dfa1648c66d0e215e8b8eb540da6bd9b7729e88b4c6/diff:/var/lib/docker/overlay2/685ece42c012e8b988affc555e627ea46a42003f7fb6511dc68fb9da6c515fd8/diff:/var/lib/docker/overlay2/224f8f237d1ebeb57711074d5b9338b377abc164e67d85cd8b482640627
98e8a/diff:/var/lib/docker/overlay2/280191c44865a7db266046c55f36cee27c985b893bca0a97310569a5df684c8a/diff:/var/lib/docker/overlay2/2a04e90c25bcb0264edd485b59f54c8e6c28a2d0c63f696590f1876b164e0ad8/diff:/var/lib/docker/overlay2/9c5536844b05a6fcc7c6de17ba2cd59669716e44474ac06421119d86c04f197e/diff:/var/lib/docker/overlay2/0db732ad07139625742260350f06f46f9978ae313af26f4afdab09884382542c/diff:/var/lib/docker/overlay2/d7e4510c4ab4dcfcd652b63a086da8e4f53866cf61cc72dfacd6e24a7ba895ac/diff",
	                "MergedDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/merged",
	                "UpperDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/diff",
	                "WorkDir": "/var/lib/docker/overlay2/0bede56c504f60cf99391fc18c2ba58383dfabf35052e1094a349da2bb9cd8a7/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "pause-20210507222034-391940",
	                "Source": "/var/lib/docker/volumes/pause-20210507222034-391940/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "pause-20210507222034-391940",
	            "Domainname": "",
	            "User": "root",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e",
	            "Volumes": null,
	            "WorkingDir": "",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "pause-20210507222034-391940",
	                "name.minikube.sigs.k8s.io": "pause-20210507222034-391940",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "8fb257f657a455a5c61064a642d32b732e65559debbded043f37bf425b0822a7",
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33197"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33196"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33191"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33195"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33193"
	                    }
	                ]
	            },
	            "SandboxKey": "/var/run/docker/netns/8fb257f657a4",
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "pause-20210507222034-391940": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.85.2"
	                    },
	                    "Links": null,
	                    "Aliases": [
	                        "1f3a30720296"
	                    ],
	                    "NetworkID": "66090a2bc48e8a0ec3403c0dc0bc3b1b9148ac10b973fc1dc8134d7bbd25b00c",
	                    "EndpointID": "e54131d7b1bfd577149c0050cd0ed718fe2b0322b3e8ec4e35cfefd08f0113ea",
	                    "Gateway": "192.168.85.1",
	                    "IPAddress": "192.168.85.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "MacAddress": "02:42:c0:a8:55:02",
	                    "DriverOpts": null
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
helpers_test.go:235: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940
helpers_test.go:235: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p pause-20210507222034-391940 -n pause-20210507222034-391940: exit status 3 (2.449134759s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0507 22:35:39.047822  643912 status.go:374] failed to get storage capacity of /var: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38890->127.0.0.1:33197: read: connection reset by peer
	E0507 22:35:39.047842  643912 status.go:247] status error: NewSession: new client: new client: ssh: handshake failed: read tcp 127.0.0.1:38890->127.0.0.1:33197: read: connection reset by peer

                                                
                                                
** /stderr **
helpers_test.go:235: status error: exit status 3 (may be ok)
helpers_test.go:237: "pause-20210507222034-391940" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestPause/serial/DeletePaused (711.05s)
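For anyone triaging this locally: the SSH errors above dial 127.0.0.1:33197, which is exactly the host port Docker published for the container's 22/tcp (see the `NetworkSettings.Ports` block in the inspect output), so the status check is reaching the right endpoint and the guest's sshd is what stopped answering. A minimal Python sketch of that port lookup, against a trimmed copy of the JSON above:

```python
import json

# Trimmed copy of the "NetworkSettings.Ports" section from the
# `docker inspect pause-20210507222034-391940` output above.
inspect_output = json.loads("""
[{"NetworkSettings": {"Ports": {
    "22/tcp":   [{"HostIp": "127.0.0.1", "HostPort": "33197"}],
    "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "33193"}]
}}}]
""")

def host_endpoint(inspect, container_port):
    """Return the (ip, port) pair a container port is published on."""
    binding = inspect[0]["NetworkSettings"]["Ports"][container_port][0]
    return binding["HostIp"], binding["HostPort"]

# Matches the address in the "ssh: handshake failed ... 127.0.0.1:33197" error.
print(host_endpoint(inspect_output, "22/tcp"))  # ('127.0.0.1', '33197')
```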

                                                
                                    
TestNetworkPlugins/group/false/Start (637.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p false-20210507223341-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=containerd
E0507 22:34:02.771747  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:34:36.574324  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory

                                                
                                                
=== CONT  TestNetworkPlugins/group/false/Start
net_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-20210507223341-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=false --driver=docker  --container-runtime=containerd: exit status 80 (10m37.42338438s)

                                                
                                                
-- stdout --
	* [false-20210507223341-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	  - MINIKUBE_LOCATION=master
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node false-20210507223341-391940 in cluster false-20210507223341-391940
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0507 22:33:41.671845  634245 out.go:291] Setting OutFile to fd 1 ...
	I0507 22:33:41.672046  634245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:33:41.672056  634245 out.go:304] Setting ErrFile to fd 2...
	I0507 22:33:41.672061  634245 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:33:41.672166  634245 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	I0507 22:33:41.672433  634245 out.go:298] Setting JSON to false
	I0507 22:33:41.711668  634245 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":11589,"bootTime":1620415232,"procs":319,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0507 22:33:41.711780  634245 start.go:118] virtualization: kvm guest
	I0507 22:33:41.714469  634245 out.go:170] * [false-20210507223341-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0507 22:33:41.716167  634245 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	I0507 22:33:41.717699  634245 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0507 22:33:41.719186  634245 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	I0507 22:33:41.720587  634245 out.go:170]   - MINIKUBE_LOCATION=master
	I0507 22:33:41.721258  634245 driver.go:322] Setting default libvirt URI to qemu:///system
	I0507 22:33:41.768773  634245 docker.go:119] docker version: linux-19.03.15
	I0507 22:33:41.768875  634245 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 22:33:41.848661  634245 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:70 SystemTime:2021-05-07 22:33:41.804081662 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 22:33:41.848754  634245 docker.go:225] overlay module found
	I0507 22:33:41.850975  634245 out.go:170] * Using the docker driver based on user configuration
	I0507 22:33:41.851009  634245 start.go:276] selected driver: docker
	I0507 22:33:41.851014  634245 start.go:718] validating driver "docker" against <nil>
	I0507 22:33:41.851041  634245 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0507 22:33:41.851085  634245 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0507 22:33:41.851095  634245 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string
	W0507 22:33:41.851110  634245 out.go:235] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	W0507 22:33:41.851118  634245 out.go:424] no arguments passed for "  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string
	I0507 22:33:41.852536  634245 out.go:170]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0507 22:33:41.853360  634245 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 22:33:41.931700  634245 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:4 ContainersRunning:4 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:59 OomKillDisable:true NGoroutines:70 SystemTime:2021-05-07 22:33:41.888267241 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 22:33:41.931825  634245 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0507 22:33:41.932032  634245 start_flags.go:733] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 22:33:41.932082  634245 cni.go:93] Creating CNI manager for "false"
	I0507 22:33:41.932094  634245 start_flags.go:273] config:
	{Name:false-20210507223341-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 22:33:41.934456  634245 out.go:170] * Starting control plane node false-20210507223341-391940 in cluster false-20210507223341-391940
	I0507 22:33:41.934501  634245 cache.go:111] Beginning downloading kic base image for docker with containerd
	W0507 22:33:41.934512  634245 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string
	W0507 22:33:41.934540  634245 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string
	I0507 22:33:41.936106  634245 out.go:170] * Pulling base image ...
	I0507 22:33:41.936144  634245 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 22:33:41.936172  634245 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 22:33:41.936183  634245 cache.go:54] Caching tarball of preloaded images
	I0507 22:33:41.936194  634245 preload.go:132] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 22:33:41.936201  634245 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
	I0507 22:33:41.936254  634245 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
	I0507 22:33:41.936279  634245 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
	I0507 22:33:41.936286  634245 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
	I0507 22:33:41.936286  634245 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/config.json ...
	I0507 22:33:41.936312  634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/config.json: {Name:mk23ccd7c8d362b864a360f03438469e7fe31500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:33:41.936323  634245 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
	I0507 22:33:42.013749  634245 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
	I0507 22:33:42.013781  634245 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
	I0507 22:33:42.013802  634245 cache.go:194] Successfully downloaded all kic artifacts
	I0507 22:33:42.013839  634245 start.go:313] acquiring machines lock for false-20210507223341-391940: {Name:mk7a6d8cf53705fef8003241594dc2d2b6aceaa5 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 22:33:42.013993  634245 start.go:317] acquired machines lock for "false-20210507223341-391940" in 129.802µs
	I0507 22:33:42.014032  634245 start.go:89] Provisioning new machine with config: &{Name:false-20210507223341-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0507 22:33:42.014139  634245 start.go:126] createHost starting for "" (driver="docker")
	I0507 22:33:42.017083  634245 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0507 22:33:42.017373  634245 start.go:160] libmachine.API.Create for "false-20210507223341-391940" (driver="docker")
	I0507 22:33:42.017412  634245 client.go:168] LocalClient.Create starting
	I0507 22:33:42.017494  634245 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem
	I0507 22:33:42.017533  634245 main.go:128] libmachine: Decoding PEM data...
	I0507 22:33:42.017560  634245 main.go:128] libmachine: Parsing certificate...
	I0507 22:33:42.017747  634245 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem
	I0507 22:33:42.017775  634245 main.go:128] libmachine: Decoding PEM data...
	I0507 22:33:42.017795  634245 main.go:128] libmachine: Parsing certificate...
	I0507 22:33:42.018231  634245 cli_runner.go:115] Run: docker network inspect false-20210507223341-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0507 22:33:42.060979  634245 cli_runner.go:162] docker network inspect false-20210507223341-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0507 22:33:42.061043  634245 network_create.go:249] running [docker network inspect false-20210507223341-391940] to gather additional debugging logs...
	I0507 22:33:42.061062  634245 cli_runner.go:115] Run: docker network inspect false-20210507223341-391940
	W0507 22:33:42.098833  634245 cli_runner.go:162] docker network inspect false-20210507223341-391940 returned with exit code 1
	I0507 22:33:42.098863  634245 network_create.go:252] error running [docker network inspect false-20210507223341-391940]: docker network inspect false-20210507223341-391940: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: false-20210507223341-391940
	I0507 22:33:42.098875  634245 network_create.go:254] output of [docker network inspect false-20210507223341-391940]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: false-20210507223341-391940
	
	** /stderr **
	I0507 22:33:42.098926  634245 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0507 22:33:42.136375  634245 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7a55e9e83b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:be:99:f6:89}}
	I0507 22:33:42.137655  634245 network.go:215] skipping subnet 192.168.58.0/24 that is taken: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName:br-d814ab98e4bf IfaceIPv4:192.168.58.1 IfaceMTU:1500 IfaceMAC:02:42:cf:75:be:bd}}
	I0507 22:33:42.138675  634245 network.go:263] reserving subnet 192.168.67.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.67.0:0xc000010dc0] misses:0}
	I0507 22:33:42.138709  634245 network.go:210] using free private subnet 192.168.67.0/24: &{IP:192.168.67.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.67.0/24 Gateway:192.168.67.1 ClientMin:192.168.67.2 ClientMax:192.168.67.254 Broadcast:192.168.67.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0507 22:33:42.138734  634245 network_create.go:100] attempt to create docker network false-20210507223341-391940 192.168.67.0/24 with gateway 192.168.67.1 and MTU of 1500 ...
	I0507 22:33:42.138776  634245 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.67.0/24 --gateway=192.168.67.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true false-20210507223341-391940
	I0507 22:33:42.210504  634245 network_create.go:84] docker network false-20210507223341-391940 192.168.67.0/24 created
	I0507 22:33:42.210539  634245 kic.go:106] calculated static IP "192.168.67.2" for the "false-20210507223341-391940" container
	I0507 22:33:42.210589  634245 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0507 22:33:42.249117  634245 cli_runner.go:115] Run: docker volume create false-20210507223341-391940 --label name.minikube.sigs.k8s.io=false-20210507223341-391940 --label created_by.minikube.sigs.k8s.io=true
	I0507 22:33:42.287717  634245 oci.go:102] Successfully created a docker volume false-20210507223341-391940
	I0507 22:33:42.287789  634245 cli_runner.go:115] Run: docker run --rm --name false-20210507223341-391940-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210507223341-391940 --entrypoint /usr/bin/test -v false-20210507223341-391940:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib
	I0507 22:33:43.054610  634245 oci.go:106] Successfully prepared a docker volume false-20210507223341-391940
	W0507 22:33:43.054673  634245 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0507 22:33:43.054683  634245 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0507 22:33:43.054754  634245 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0507 22:33:43.054752  634245 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 22:33:43.054822  634245 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 22:33:43.054834  634245 kic.go:179] Starting extracting preloaded images to volume ...
	I0507 22:33:43.054876  634245 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20210507223341-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir
	I0507 22:33:43.139196  634245 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname false-20210507223341-391940 --name false-20210507223341-391940 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=false-20210507223341-391940 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=false-20210507223341-391940 --network false-20210507223341-391940 --ip 192.168.67.2 --volume false-20210507223341-391940:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
	I0507 22:33:43.680563  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Running}}
	I0507 22:33:43.734390  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
	I0507 22:33:43.788488  634245 cli_runner.go:115] Run: docker exec false-20210507223341-391940 stat /var/lib/dpkg/alternatives/iptables
	I0507 22:33:43.915111  634245 oci.go:278] the created container "false-20210507223341-391940" has a running status.
	I0507 22:33:43.915155  634245 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa...
	I0507 22:33:44.025939  634245 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0507 22:33:44.438055  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
	I0507 22:33:44.483762  634245 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0507 22:33:44.483785  634245 kic_runner.go:115] Args: [docker exec --privileged false-20210507223341-391940 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0507 22:33:47.326874  634245 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v false-20210507223341-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.27195433s)
	I0507 22:33:47.326921  634245 kic.go:188] duration metric: took 4.272083 seconds to extract preloaded images to volume
	I0507 22:33:47.327008  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
	I0507 22:33:47.368924  634245 machine.go:88] provisioning docker machine ...
	I0507 22:33:47.368961  634245 ubuntu.go:169] provisioning hostname "false-20210507223341-391940"
	I0507 22:33:47.369027  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:33:47.406971  634245 main.go:128] libmachine: Using SSH client type: native
	I0507 22:33:47.407198  634245 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 33291 <nil> <nil>}
	I0507 22:33:47.407218  634245 main.go:128] libmachine: About to run SSH command:
	sudo hostname false-20210507223341-391940 && echo "false-20210507223341-391940" | sudo tee /etc/hostname
	I0507 22:33:47.535710  634245 main.go:128] libmachine: SSH cmd err, output: <nil>: false-20210507223341-391940
	
	I0507 22:33:47.535796  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:33:47.586481  634245 main.go:128] libmachine: Using SSH client type: native
	I0507 22:33:47.586672  634245 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 33291 <nil> <nil>}
	I0507 22:33:47.586696  634245 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-20210507223341-391940' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-20210507223341-391940/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-20210507223341-391940' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 22:33:47.703075  634245 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0507 22:33:47.703107  634245 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube}
	I0507 22:33:47.703155  634245 ubuntu.go:177] setting up certificates
	I0507 22:33:47.703165  634245 provision.go:83] configureAuth start
	I0507 22:33:47.703224  634245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210507223341-391940
	I0507 22:33:47.748068  634245 provision.go:137] copyHostCerts
	I0507 22:33:47.748132  634245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ...
	I0507 22:33:47.748148  634245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem
	I0507 22:33:47.748207  634245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes)
	I0507 22:33:47.748309  634245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ...
	I0507 22:33:47.748328  634245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem
	I0507 22:33:47.748356  634245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes)
	I0507 22:33:47.748491  634245 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ...
	I0507 22:33:47.748505  634245 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem
	I0507 22:33:47.748531  634245 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes)
	I0507 22:33:47.748587  634245 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.false-20210507223341-391940 san=[192.168.67.2 127.0.0.1 localhost 127.0.0.1 minikube false-20210507223341-391940]
	I0507 22:33:48.003964  634245 provision.go:165] copyRemoteCerts
	I0507 22:33:48.004046  634245 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 22:33:48.004111  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:33:48.050112  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:33:48.135189  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0507 22:33:48.154693  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0507 22:33:48.174607  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0507 22:33:48.194288  634245 provision.go:86] duration metric: configureAuth took 491.105993ms
	I0507 22:33:48.194317  634245 ubuntu.go:193] setting minikube options for container-runtime
	I0507 22:33:48.194495  634245 machine.go:91] provisioned docker machine in 825.54959ms
	I0507 22:33:48.194509  634245 client.go:171] LocalClient.Create took 6.177087062s
	I0507 22:33:48.194539  634245 start.go:168] duration metric: libmachine.API.Create for "false-20210507223341-391940" took 6.177154104s
	I0507 22:33:48.194555  634245 start.go:267] post-start starting for "false-20210507223341-391940" (driver="docker")
	I0507 22:33:48.194561  634245 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 22:33:48.194620  634245 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 22:33:48.194669  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:33:48.242303  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:33:48.326894  634245 ssh_runner.go:149] Run: cat /etc/os-release
	I0507 22:33:48.329970  634245 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0507 22:33:48.329997  634245 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0507 22:33:48.330011  634245 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0507 22:33:48.330019  634245 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0507 22:33:48.330034  634245 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ...
	I0507 22:33:48.330082  634245 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ...
	I0507 22:33:48.330191  634245 start.go:270] post-start completed in 135.629287ms
	I0507 22:33:48.330503  634245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210507223341-391940
	I0507 22:33:48.380856  634245 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/config.json ...
	I0507 22:33:48.381145  634245 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 22:33:48.381186  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:33:48.426083  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:33:48.507808  634245 start.go:129] duration metric: createHost completed in 6.493653731s
	I0507 22:33:48.507837  634245 start.go:80] releasing machines lock for "false-20210507223341-391940", held for 6.493829378s
	I0507 22:33:48.507931  634245 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" false-20210507223341-391940
	I0507 22:33:48.554550  634245 ssh_runner.go:149] Run: systemctl --version
	I0507 22:33:48.554610  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:33:48.554623  634245 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0507 22:33:48.554686  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:33:48.601708  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:33:48.603836  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:33:48.753079  634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0507 22:33:48.762387  634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0507 22:33:48.770850  634245 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0507 22:33:48.786581  634245 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0507 22:33:48.795123  634245 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0507 22:33:48.872198  634245 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0507 22:33:48.939801  634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0507 22:33:48.950402  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 22:33:48.964840  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
	I0507 22:33:48.978583  634245 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 22:33:48.985597  634245 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0507 22:33:48.985654  634245 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0507 22:33:48.993180  634245 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0507 22:33:48.999299  634245 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0507 22:33:49.061225  634245 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0507 22:33:49.132733  634245 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock
	I0507 22:33:49.132802  634245 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0507 22:33:49.138104  634245 start.go:393] Will wait 60s for crictl version
	I0507 22:33:49.138162  634245 ssh_runner.go:149] Run: sudo crictl version
	I0507 22:33:49.164855  634245 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-07T22:33:49Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
	I0507 22:34:00.214633  634245 ssh_runner.go:149] Run: sudo crictl version
	I0507 22:34:00.240982  634245 start.go:402] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.4
	RuntimeApiVersion:  v1alpha2
	I0507 22:34:00.241043  634245 ssh_runner.go:149] Run: containerd --version
	I0507 22:34:00.264679  634245 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	I0507 22:34:00.264748  634245 cli_runner.go:115] Run: docker network inspect false-20210507223341-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0507 22:34:00.301900  634245 ssh_runner.go:149] Run: grep 192.168.67.1	host.minikube.internal$ /etc/hosts
	I0507 22:34:00.305103  634245 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.67.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 22:34:00.314145  634245 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/client.crt
	I0507 22:34:00.314263  634245 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/client.key
	I0507 22:34:00.314376  634245 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 22:34:00.314402  634245 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 22:34:00.314435  634245 ssh_runner.go:149] Run: sudo crictl images --output json
	I0507 22:34:00.336029  634245 containerd.go:571] all images are preloaded for containerd runtime.
	I0507 22:34:00.336049  634245 containerd.go:481] Images already preloaded, skipping extraction
	I0507 22:34:00.336091  634245 ssh_runner.go:149] Run: sudo crictl images --output json
	I0507 22:34:00.356941  634245 containerd.go:571] all images are preloaded for containerd runtime.
	I0507 22:34:00.356961  634245 cache_images.go:74] Images are preloaded, skipping loading
	I0507 22:34:00.356999  634245 ssh_runner.go:149] Run: sudo crictl info
	I0507 22:34:00.378057  634245 cni.go:93] Creating CNI manager for "false"
	I0507 22:34:00.378078  634245 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0507 22:34:00.378090  634245 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.67.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:false-20210507223341-391940 NodeName:false-20210507223341-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.67.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.67.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0507 22:34:00.378222  634245 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.67.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "false-20210507223341-391940"
	  kubeletExtraArgs:
	    node-ip: 192.168.67.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.67.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0507 22:34:00.378299  634245 kubeadm.go:901] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=false-20210507223341-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.67.2 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:}
	I0507 22:34:00.378342  634245 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0507 22:34:00.384607  634245 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 22:34:00.384661  634245 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 22:34:00.390683  634245 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (520 bytes)
	I0507 22:34:00.402157  634245 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 22:34:00.413347  634245 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1866 bytes)
	I0507 22:34:00.424404  634245 ssh_runner.go:149] Run: grep 192.168.67.2	control-plane.minikube.internal$ /etc/hosts
	I0507 22:34:00.426984  634245 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.67.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 22:34:00.435125  634245 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940 for IP: 192.168.67.2
	I0507 22:34:00.435175  634245 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
	I0507 22:34:00.435200  634245 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
	I0507 22:34:00.435267  634245 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/client.key
	I0507 22:34:00.435291  634245 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e
	I0507 22:34:00.435306  634245 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e with IP's: [192.168.67.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0507 22:34:00.683898  634245 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e ...
	I0507 22:34:00.683927  634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e: {Name:mk832f30aaa54d710c053960a14b9c9763ea8855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:34:00.684135  634245 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e ...
	I0507 22:34:00.684155  634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e: {Name:mkc25e1e27c1f81f26ea4162f9315965b8db9e7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:34:00.684269  634245 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt
	I0507 22:34:00.684332  634245 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key.c7fa3a9e -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key
	I0507 22:34:00.684379  634245 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key
	I0507 22:34:00.684388  634245 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt with IP's: []
	I0507 22:34:00.879310  634245 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt ...
	I0507 22:34:00.879344  634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt: {Name:mk26c26b342b31b8cbe560fb625a13c27bc1f21c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:34:00.879542  634245 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key ...
	I0507 22:34:00.879556  634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key: {Name:mk7c148e46579cba18beedf400ac99dc15dc98b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:34:00.879749  634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes)
	W0507 22:34:00.879790  634245 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes
	I0507 22:34:00.879802  634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes)
	I0507 22:34:00.879830  634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes)
	I0507 22:34:00.879856  634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes)
	I0507 22:34:00.879878  634245 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes)
	I0507 22:34:00.880826  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0507 22:34:00.898346  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0507 22:34:00.946964  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 22:34:00.963196  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/false-20210507223341-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0507 22:34:00.978632  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 22:34:00.994038  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0507 22:34:01.009068  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 22:34:01.024278  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 22:34:01.039382  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes)
	I0507 22:34:01.054804  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 22:34:01.070027  634245 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 22:34:01.081077  634245 ssh_runner.go:149] Run: openssl version
	I0507 22:34:01.085457  634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem"
	I0507 22:34:01.091954  634245 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem
	I0507 22:34:01.094696  634245 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May  7 21:57 /usr/share/ca-certificates/391940.pem
	I0507 22:34:01.094741  634245 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem
	I0507 22:34:01.099139  634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0"
	I0507 22:34:01.105589  634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 22:34:01.112220  634245 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 22:34:01.114972  634245 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May  7 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0507 22:34:01.115009  634245 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 22:34:01.119349  634245 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
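The `openssl x509 -hash` + `ln -fs` steps above install each PEM under its subject-name hash (e.g. `b5213941.0`) so OpenSSL's CApath lookup can locate it. A minimal sketch of the same hash-and-symlink step, using a throwaway self-signed cert in a demo directory (all paths and the `demoCA` name here are illustrative, not the real minikube ones):

```shell
#!/bin/bash
# Hash-and-symlink a CA cert into a CApath directory, as the log does
# for minikubeCA.pem -> /etc/ssl/certs/b5213941.0.
set -e
DIR=/tmp/demo-ca
mkdir -p "$DIR/certs"

# Throwaway self-signed cert standing in for minikubeCA.pem.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout "$DIR/demo.key" -out "$DIR/demo.pem" -days 1 2>/dev/null

HASH=$(openssl x509 -hash -noout -in "$DIR/demo.pem")  # subject-name hash
ln -fs "$DIR/demo.pem" "$DIR/certs/${HASH}.0"          # CApath lookup name
ls "$DIR/certs"
```

The `test -L … || ln -fs …` guard in the log makes the step idempotent: the symlink is only recreated if it is missing.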
	I0507 22:34:01.125976  634245 kubeadm.go:381] StartCluster: {Name:false-20210507223341-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:false-20210507223341-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 22:34:01.126045  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0507 22:34:01.126083  634245 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0507 22:34:01.148376  634245 cri.go:76] found id: ""
	I0507 22:34:01.148428  634245 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 22:34:01.154747  634245 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 22:34:01.161067  634245 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0507 22:34:01.161115  634245 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 22:34:01.167336  634245 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 22:34:01.167378  634245 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0507 22:34:25.011836  634245 out.go:424] no arguments passed for "  - Generating certificates and keys ..." - returning raw string
	W0507 22:34:25.011876  634245 out.go:424] no arguments passed for "  - Generating certificates and keys ..." - returning raw string
	I0507 22:34:25.013486  634245 out.go:197]   - Generating certificates and keys ...
	W0507 22:34:25.015044  634245 out.go:424] no arguments passed for "  - Booting up control plane ..." - returning raw string
	W0507 22:34:25.015075  634245 out.go:424] no arguments passed for "  - Booting up control plane ..." - returning raw string
	I0507 22:34:25.016674  634245 out.go:197]   - Booting up control plane ...
	W0507 22:34:25.017665  634245 out.go:424] no arguments passed for "  - Configuring RBAC rules ..." - returning raw string
	W0507 22:34:25.017691  634245 out.go:424] no arguments passed for "  - Configuring RBAC rules ..." - returning raw string
	I0507 22:34:25.019150  634245 out.go:197]   - Configuring RBAC rules ...
	I0507 22:34:25.021525  634245 cni.go:93] Creating CNI manager for "false"
	I0507 22:34:25.021571  634245 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 22:34:25.021622  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:25.021648  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=false-20210507223341-391940 minikube.k8s.io/updated_at=2021_05_07T22_34_25_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:25.219333  634245 ops.go:34] apiserver oom_adj: -16
	I0507 22:34:25.219382  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:25.782442  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:26.282719  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:26.782854  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:27.282542  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:27.782784  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:28.281876  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:28.782701  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:29.282090  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:29.782892  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:30.282565  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:30.782810  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:31.282916  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:31.782697  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:32.281924  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:32.782535  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:33.282484  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:33.782576  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:34.281868  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:34.782692  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:35.282380  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:35.781966  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:36.282819  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:36.782131  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:37.282111  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:37.782626  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:38.282804  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:38.782257  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:39.282289  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:39.782288  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:40.281934  634245 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:34:40.562366  634245 kubeadm.go:977] duration metric: took 15.540789582s to wait for elevateKubeSystemPrivileges.
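The repeated `kubectl get sa default` runs above are a roughly 500ms poll waiting for the `default` service account to appear (15.5s total here). The generic shape of that wait loop can be sketched as follows; the `wait_for` helper and the marker-file demo command are illustrative stand-ins, not minikube code:

```shell
#!/bin/bash
# Poll a command every 0.5s until it succeeds or a deadline passes,
# mirroring the elevateKubeSystemPrivileges wait in the log.
wait_for() {
  deadline=$(( $(date +%s) + $1 )); shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 0.5
  done
}

# Demo: wait for a marker file (stand-in for `kubectl get sa default`).
rm -f /tmp/sa-ready
( sleep 1; touch /tmp/sa-ready ) &
wait_for 10 test -f /tmp/sa-ready && echo "default service account ready"
```

Polling with a hard deadline (rather than a fixed retry count) is what lets the log report a duration metric once the condition finally holds.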
	I0507 22:34:40.562392  634245 kubeadm.go:383] StartCluster complete in 39.436423341s
	I0507 22:34:40.562415  634245 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:34:40.562517  634245 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	I0507 22:34:40.564316  634245 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:34:41.081731  634245 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "false-20210507223341-391940" rescaled to 1
	I0507 22:34:41.081788  634245 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.67.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	W0507 22:34:41.081819  634245 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
	W0507 22:34:41.081835  634245 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
	I0507 22:34:41.083643  634245 out.go:170] * Verifying Kubernetes components...
	I0507 22:34:41.081896  634245 addons.go:328] enableAddons start: toEnable=map[], additional=[]
	I0507 22:34:41.083711  634245 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0507 22:34:41.083720  634245 addons.go:55] Setting storage-provisioner=true in profile "false-20210507223341-391940"
	I0507 22:34:41.083742  634245 addons.go:131] Setting addon storage-provisioner=true in "false-20210507223341-391940"
	W0507 22:34:41.083749  634245 addons.go:140] addon storage-provisioner should already be in state true
	I0507 22:34:41.083775  634245 host.go:66] Checking if "false-20210507223341-391940" exists ...
	I0507 22:34:41.082063  634245 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 22:34:41.083883  634245 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists
	I0507 22:34:41.083907  634245 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 1.854335ms
	I0507 22:34:41.083925  634245 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded
	I0507 22:34:41.083936  634245 cache.go:88] Successfully saved all images to host disk.
	I0507 22:34:41.084153  634245 addons.go:55] Setting default-storageclass=true in profile "false-20210507223341-391940"
	I0507 22:34:41.084176  634245 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "false-20210507223341-391940"
	I0507 22:34:41.084357  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
	I0507 22:34:41.084378  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
	I0507 22:34:41.084465  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
	I0507 22:34:41.103138  634245 node_ready.go:35] waiting up to 5m0s for node "false-20210507223341-391940" to be "Ready" ...
	I0507 22:34:41.107823  634245 node_ready.go:49] node "false-20210507223341-391940" has status "Ready":"True"
	I0507 22:34:41.107847  634245 node_ready.go:38] duration metric: took 4.683133ms waiting for node "false-20210507223341-391940" to be "Ready" ...
	I0507 22:34:41.107858  634245 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 22:34:41.119754  634245 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace to be "Ready" ...
	I0507 22:34:41.141845  634245 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 22:34:41.141970  634245 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 22:34:41.141982  634245 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 22:34:41.142040  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:34:41.143864  634245 ssh_runner.go:149] Run: sudo crictl images --output json
	I0507 22:34:41.143908  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:34:41.151472  634245 addons.go:131] Setting addon default-storageclass=true in "false-20210507223341-391940"
	W0507 22:34:41.151497  634245 addons.go:140] addon default-storageclass should already be in state true
	I0507 22:34:41.151538  634245 host.go:66] Checking if "false-20210507223341-391940" exists ...
	I0507 22:34:41.152064  634245 cli_runner.go:115] Run: docker container inspect false-20210507223341-391940 --format={{.State.Status}}
	I0507 22:34:41.192759  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:34:41.193663  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:34:41.201905  634245 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 22:34:41.201933  634245 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 22:34:41.202002  634245 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" false-20210507223341-391940
	I0507 22:34:41.248456  634245 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33291 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/false-20210507223341-391940/id_rsa Username:docker}
	I0507 22:34:41.289244  634245 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 22:34:41.306206  634245 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded.
	I0507 22:34:41.306232  634245 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940]
	I0507 22:34:41.306296  634245 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940
	I0507 22:34:41.306340  634245 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test
	I0507 22:34:41.343566  634245 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0507 22:34:41.545836  634245 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0507 22:34:41.545883  634245 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940
	I0507 22:34:41.546940  634245 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist
	W0507 22:34:41.700674  634245 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0507 22:34:41.770255  634245 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
	I0507 22:34:41.770288  634245 addons.go:330] enableAddons completed in 688.400381ms
	I0507 22:34:41.847381  634245 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0507 22:34:41.847431  634245 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0507 22:34:41.847467  634245 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940
	I0507 22:34:41.847488  634245 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:34:41.847633  634245 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:34:41.851020  634245 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory
	I0507 22:34:41.851054  634245 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes)
	I0507 22:34:41.868164  634245 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:34:41.868205  634245 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:34:41.985873  634245 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache
	I0507 22:34:41.985907  634245 cache_images.go:113] Successfully loaded all cached images
	I0507 22:34:41.985916  634245 cache_images.go:82] LoadImages completed in 679.673608ms
	I0507 22:34:41.985928  634245 cache_images.go:252] succeeded pushing to: false-20210507223341-391940
	I0507 22:34:41.985933  634245 cache_images.go:253] failed pushing to: 
	I0507 22:34:43.135548  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:34:45.135601  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:34:47.135694  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:34:49.135952  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:34:51.635600  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:34:54.136548  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:34:56.635856  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:34:59.135635  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:01.135782  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:03.634948  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:06.135075  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:08.135623  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:10.135862  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:12.634936  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:14.635724  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:17.135930  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:19.136069  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:21.634975  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:23.635590  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:25.636221  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:27.636268  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:30.135517  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:35.748122  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:38.136063  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:40.635542  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:42.635670  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:45.163965  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:47.635873  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:49.636001  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:52.135134  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:54.139210  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:56.636258  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:35:59.135955  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:01.635635  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:03.636998  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:06.136483  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:08.636177  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:11.135760  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:13.635617  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:16.135532  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:18.635533  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:21.135723  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:23.636260  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:26.135780  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:28.636008  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:31.135578  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:33.292473  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:35.635972  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:38.135299  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:40.136525  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:42.635366  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:45.135417  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:47.636272  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:50.135387  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:52.137297  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:54.636218  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:57.134756  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:36:59.135357  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:01.636149  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:04.135471  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:06.636279  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:09.135287  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:11.136159  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:13.635076  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:15.635169  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:17.635332  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:19.635752  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:22.136027  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:24.635252  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:27.135233  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:29.135626  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:31.636500  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:34.135445  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:36.136008  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:38.636044  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:41.136299  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:43.137705  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:45.659738  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:48.135824  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:52.779300  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:55.596476  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:57.634881  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:37:59.635199  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:02.135474  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:04.136331  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:06.636167  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:09.136295  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:11.639035  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:14.136581  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:16.664836  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:19.636274  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:21.636787  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:23.636920  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:26.136591  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:28.137163  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:30.636213  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:32.636342  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:35.136211  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:37.636287  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:40.135484  634245 pod_ready.go:102] pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace has status "Ready":"False"
	I0507 22:38:41.139921  634245 pod_ready.go:81] duration metric: took 4m0.020130729s waiting for pod "coredns-74ff55c5b-q8wsb" in "kube-system" namespace to be "Ready" ...
	E0507 22:38:41.139947  634245 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0507 22:38:41.139956  634245 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.141858  634245 pod_ready.go:97] error getting pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-tzwx6" not found
	I0507 22:38:41.141879  634245 pod_ready.go:81] duration metric: took 1.916082ms waiting for pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace to be "Ready" ...
	E0507 22:38:41.141891  634245 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-74ff55c5b-tzwx6" in "kube-system" namespace (skipping!): pods "coredns-74ff55c5b-tzwx6" not found
	I0507 22:38:41.141897  634245 pod_ready.go:78] waiting up to 5m0s for pod "etcd-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.145989  634245 pod_ready.go:92] pod "etcd-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:38:41.146009  634245 pod_ready.go:81] duration metric: took 4.104794ms waiting for pod "etcd-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.146022  634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.150423  634245 pod_ready.go:92] pod "kube-apiserver-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:38:41.150448  634245 pod_ready.go:81] duration metric: took 4.417403ms waiting for pod "kube-apiserver-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.150460  634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.333279  634245 pod_ready.go:92] pod "kube-controller-manager-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:38:41.333302  634245 pod_ready.go:81] duration metric: took 182.833286ms waiting for pod "kube-controller-manager-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.333317  634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-bmhxt" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.733691  634245 pod_ready.go:92] pod "kube-proxy-bmhxt" in "kube-system" namespace has status "Ready":"True"
	I0507 22:38:41.733713  634245 pod_ready.go:81] duration metric: took 400.386903ms waiting for pod "kube-proxy-bmhxt" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:41.733726  634245 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:42.134225  634245 pod_ready.go:92] pod "kube-scheduler-false-20210507223341-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:38:42.134254  634245 pod_ready.go:81] duration metric: took 400.518446ms waiting for pod "kube-scheduler-false-20210507223341-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:38:42.134266  634245 pod_ready.go:38] duration metric: took 4m1.026392157s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 22:38:42.134291  634245 api_server.go:50] waiting for apiserver process to appear ...
	I0507 22:38:42.134322  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0507 22:38:42.134385  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0507 22:38:42.162111  634245 cri.go:76] found id: "a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
	I0507 22:38:42.162141  634245 cri.go:76] found id: ""
	I0507 22:38:42.162149  634245 logs.go:270] 1 containers: [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a]
	I0507 22:38:42.162200  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:42.165473  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0507 22:38:42.165534  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0507 22:38:42.189732  634245 cri.go:76] found id: "65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
	I0507 22:38:42.189755  634245 cri.go:76] found id: ""
	I0507 22:38:42.189763  634245 logs.go:270] 1 containers: [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb]
	I0507 22:38:42.189812  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:42.192824  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0507 22:38:42.192882  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0507 22:38:42.213917  634245 cri.go:76] found id: ""
	I0507 22:38:42.213936  634245 logs.go:270] 0 containers: []
	W0507 22:38:42.213943  634245 logs.go:272] No container was found matching "coredns"
	I0507 22:38:42.213949  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0507 22:38:42.213990  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0507 22:38:42.240297  634245 cri.go:76] found id: "469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
	I0507 22:38:42.240323  634245 cri.go:76] found id: ""
	I0507 22:38:42.240332  634245 logs.go:270] 1 containers: [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce]
	I0507 22:38:42.240385  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:42.243950  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0507 22:38:42.244124  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0507 22:38:42.270810  634245 cri.go:76] found id: "313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
	I0507 22:38:42.270833  634245 cri.go:76] found id: ""
	I0507 22:38:42.270840  634245 logs.go:270] 1 containers: [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4]
	I0507 22:38:42.270902  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:42.274293  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0507 22:38:42.274357  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0507 22:38:42.301269  634245 cri.go:76] found id: ""
	I0507 22:38:42.301300  634245 logs.go:270] 0 containers: []
	W0507 22:38:42.301310  634245 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0507 22:38:42.301319  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0507 22:38:42.301388  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0507 22:38:42.324104  634245 cri.go:76] found id: "d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
	I0507 22:38:42.324130  634245 cri.go:76] found id: ""
	I0507 22:38:42.324139  634245 logs.go:270] 1 containers: [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9]
	I0507 22:38:42.324188  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:42.326956  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0507 22:38:42.327020  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0507 22:38:42.353276  634245 cri.go:76] found id: "16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
	I0507 22:38:42.353302  634245 cri.go:76] found id: ""
	I0507 22:38:42.353308  634245 logs.go:270] 1 containers: [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984]
	I0507 22:38:42.353354  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:42.356413  634245 logs.go:123] Gathering logs for containerd ...
	I0507 22:38:42.356435  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0507 22:38:42.393204  634245 logs.go:123] Gathering logs for kubelet ...
	I0507 22:38:42.393229  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 22:38:42.469488  634245 logs.go:123] Gathering logs for dmesg ...
	I0507 22:38:42.469522  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 22:38:42.492577  634245 logs.go:123] Gathering logs for storage-provisioner [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9] ...
	I0507 22:38:42.492613  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
	I0507 22:38:42.524556  634245 logs.go:123] Gathering logs for kube-scheduler [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce] ...
	I0507 22:38:42.524590  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
	I0507 22:38:42.562198  634245 logs.go:123] Gathering logs for kube-proxy [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4] ...
	I0507 22:38:42.562230  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
	I0507 22:38:42.612036  634245 logs.go:123] Gathering logs for kube-controller-manager [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984] ...
	I0507 22:38:42.612076  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
	I0507 22:38:42.649080  634245 logs.go:123] Gathering logs for container status ...
	I0507 22:38:42.649116  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 22:38:42.677599  634245 logs.go:123] Gathering logs for describe nodes ...
	I0507 22:38:42.677638  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 22:38:42.811684  634245 logs.go:123] Gathering logs for kube-apiserver [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a] ...
	I0507 22:38:42.811716  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
	I0507 22:38:42.866487  634245 logs.go:123] Gathering logs for etcd [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb] ...
	I0507 22:38:42.866523  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
	I0507 22:38:45.406134  634245 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 22:38:45.426431  634245 api_server.go:70] duration metric: took 4m4.344609735s to wait for apiserver process to appear ...
	I0507 22:38:45.426453  634245 api_server.go:86] waiting for apiserver healthz status ...
	I0507 22:38:45.426478  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0507 22:38:45.426533  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0507 22:38:45.454367  634245 cri.go:76] found id: "a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
	I0507 22:38:45.454394  634245 cri.go:76] found id: ""
	I0507 22:38:45.454403  634245 logs.go:270] 1 containers: [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a]
	I0507 22:38:45.454454  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:45.457620  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0507 22:38:45.457679  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0507 22:38:45.479545  634245 cri.go:76] found id: "65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
	I0507 22:38:45.479568  634245 cri.go:76] found id: ""
	I0507 22:38:45.479576  634245 logs.go:270] 1 containers: [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb]
	I0507 22:38:45.479624  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:45.483012  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0507 22:38:45.483073  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0507 22:38:45.506317  634245 cri.go:76] found id: ""
	I0507 22:38:45.506341  634245 logs.go:270] 0 containers: []
	W0507 22:38:45.506349  634245 logs.go:272] No container was found matching "coredns"
	I0507 22:38:45.506357  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0507 22:38:45.506414  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0507 22:38:45.527687  634245 cri.go:76] found id: "469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
	I0507 22:38:45.527707  634245 cri.go:76] found id: ""
	I0507 22:38:45.527714  634245 logs.go:270] 1 containers: [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce]
	I0507 22:38:45.527761  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:45.530519  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0507 22:38:45.530571  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0507 22:38:45.553316  634245 cri.go:76] found id: "313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
	I0507 22:38:45.553340  634245 cri.go:76] found id: ""
	I0507 22:38:45.553347  634245 logs.go:270] 1 containers: [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4]
	I0507 22:38:45.553387  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:45.556316  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0507 22:38:45.556371  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0507 22:38:45.583749  634245 cri.go:76] found id: ""
	I0507 22:38:45.583773  634245 logs.go:270] 0 containers: []
	W0507 22:38:45.583780  634245 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0507 22:38:45.583789  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0507 22:38:45.583841  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0507 22:38:45.605479  634245 cri.go:76] found id: "d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
	I0507 22:38:45.605498  634245 cri.go:76] found id: ""
	I0507 22:38:45.605505  634245 logs.go:270] 1 containers: [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9]
	I0507 22:38:45.605554  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:45.608302  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0507 22:38:45.608359  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0507 22:38:45.628727  634245 cri.go:76] found id: "16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
	I0507 22:38:45.628746  634245 cri.go:76] found id: ""
	I0507 22:38:45.628752  634245 logs.go:270] 1 containers: [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984]
	I0507 22:38:45.628787  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:45.631382  634245 logs.go:123] Gathering logs for kube-controller-manager [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984] ...
	I0507 22:38:45.631406  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
	I0507 22:38:45.668533  634245 logs.go:123] Gathering logs for containerd ...
	I0507 22:38:45.668560  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0507 22:38:45.711740  634245 logs.go:123] Gathering logs for container status ...
	I0507 22:38:45.711773  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 22:38:45.738122  634245 logs.go:123] Gathering logs for dmesg ...
	I0507 22:38:45.738153  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 22:38:45.764468  634245 logs.go:123] Gathering logs for describe nodes ...
	I0507 22:38:45.764498  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 22:38:45.865790  634245 logs.go:123] Gathering logs for kube-apiserver [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a] ...
	I0507 22:38:45.865820  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
	I0507 22:38:45.925696  634245 logs.go:123] Gathering logs for kube-scheduler [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce] ...
	I0507 22:38:45.925733  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
	I0507 22:38:45.969456  634245 logs.go:123] Gathering logs for kubelet ...
	I0507 22:38:45.969492  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 22:38:46.063943  634245 logs.go:123] Gathering logs for etcd [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb] ...
	I0507 22:38:46.063988  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
	I0507 22:38:46.105268  634245 logs.go:123] Gathering logs for kube-proxy [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4] ...
	I0507 22:38:46.105295  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
	I0507 22:38:46.131406  634245 logs.go:123] Gathering logs for storage-provisioner [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9] ...
	I0507 22:38:46.131437  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
	I0507 22:38:48.659814  634245 api_server.go:223] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0507 22:38:48.666075  634245 api_server.go:249] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0507 22:38:48.666934  634245 api_server.go:139] control plane version: v1.20.2
	I0507 22:38:48.666956  634245 api_server.go:129] duration metric: took 3.240497832s to wait for apiserver health ...
	I0507 22:38:48.666966  634245 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 22:38:48.666994  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0507 22:38:48.667057  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0507 22:38:48.692526  634245 cri.go:76] found id: "a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
	I0507 22:38:48.692552  634245 cri.go:76] found id: ""
	I0507 22:38:48.692561  634245 logs.go:270] 1 containers: [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a]
	I0507 22:38:48.692607  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:48.695470  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0507 22:38:48.695533  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0507 22:38:48.718740  634245 cri.go:76] found id: "65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
	I0507 22:38:48.718764  634245 cri.go:76] found id: ""
	I0507 22:38:48.718773  634245 logs.go:270] 1 containers: [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb]
	I0507 22:38:48.718807  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:48.721549  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0507 22:38:48.721606  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0507 22:38:48.743298  634245 cri.go:76] found id: ""
	I0507 22:38:48.743321  634245 logs.go:270] 0 containers: []
	W0507 22:38:48.743328  634245 logs.go:272] No container was found matching "coredns"
	I0507 22:38:48.743341  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0507 22:38:48.743393  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0507 22:38:48.768180  634245 cri.go:76] found id: "469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
	I0507 22:38:48.768203  634245 cri.go:76] found id: ""
	I0507 22:38:48.768210  634245 logs.go:270] 1 containers: [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce]
	I0507 22:38:48.768265  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:48.771177  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0507 22:38:48.771234  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0507 22:38:48.794995  634245 cri.go:76] found id: "313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
	I0507 22:38:48.795016  634245 cri.go:76] found id: ""
	I0507 22:38:48.795022  634245 logs.go:270] 1 containers: [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4]
	I0507 22:38:48.795057  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:48.797732  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0507 22:38:48.797779  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0507 22:38:48.818179  634245 cri.go:76] found id: ""
	I0507 22:38:48.818195  634245 logs.go:270] 0 containers: []
	W0507 22:38:48.818200  634245 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0507 22:38:48.818207  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0507 22:38:48.818255  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0507 22:38:48.839185  634245 cri.go:76] found id: "d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
	I0507 22:38:48.839210  634245 cri.go:76] found id: ""
	I0507 22:38:48.839217  634245 logs.go:270] 1 containers: [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9]
	I0507 22:38:48.839262  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:48.841966  634245 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0507 22:38:48.842025  634245 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0507 22:38:48.863178  634245 cri.go:76] found id: "16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
	I0507 22:38:48.863197  634245 cri.go:76] found id: ""
	I0507 22:38:48.863203  634245 logs.go:270] 1 containers: [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984]
	I0507 22:38:48.863249  634245 ssh_runner.go:149] Run: which crictl
	I0507 22:38:48.865923  634245 logs.go:123] Gathering logs for dmesg ...
	I0507 22:38:48.865942  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 22:38:48.885774  634245 logs.go:123] Gathering logs for etcd [65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb] ...
	I0507 22:38:48.885802  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 65b0f048ab917acd8b7defae076f2030e6aa99137e9face4eb533dda95cc20cb"
	I0507 22:38:48.918266  634245 logs.go:123] Gathering logs for kube-scheduler [469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce] ...
	I0507 22:38:48.918292  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 469df8196853fb8b2b194b47e8ce03139b5ed29809e63646d656cef725705dce"
	I0507 22:38:48.946449  634245 logs.go:123] Gathering logs for storage-provisioner [d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9] ...
	I0507 22:38:48.946478  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d14ceb5681dd7e442815398cbe80a86fe32219b787da4ebf8c47f3a0244338e9"
	I0507 22:38:48.969380  634245 logs.go:123] Gathering logs for kube-controller-manager [16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984] ...
	I0507 22:38:48.969414  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16a5d9bfbb01a5686500f5141b314de652e264211dcab6ff9f3fb68ba1c45984"
	I0507 22:38:49.002782  634245 logs.go:123] Gathering logs for containerd ...
	I0507 22:38:49.002813  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0507 22:38:49.047215  634245 logs.go:123] Gathering logs for container status ...
	I0507 22:38:49.047248  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 22:38:49.072767  634245 logs.go:123] Gathering logs for kubelet ...
	I0507 22:38:49.072799  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 22:38:49.154764  634245 logs.go:123] Gathering logs for describe nodes ...
	I0507 22:38:49.154794  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 22:38:49.245837  634245 logs.go:123] Gathering logs for kube-apiserver [a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a] ...
	I0507 22:38:49.245867  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a6bffe1f7c2d33d79a1f6208aa698f88c8090c7cb95f32586e1c3e451131814a"
	I0507 22:38:49.308021  634245 logs.go:123] Gathering logs for kube-proxy [313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4] ...
	I0507 22:38:49.308064  634245 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 313c5cc700f902d29c6df674481483d8ad0a71de3269d40fd5a7b5a302c836d4"
	I0507 22:38:51.837825  634245 system_pods.go:59] 7 kube-system pods found
	I0507 22:38:51.837862  634245 system_pods.go:61] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:38:51.837871  634245 system_pods.go:61] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:38:51.837879  634245 system_pods.go:61] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:38:51.837887  634245 system_pods.go:61] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:38:51.837892  634245 system_pods.go:61] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:38:51.837898  634245 system_pods.go:61] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:38:51.837903  634245 system_pods.go:61] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:38:51.837909  634245 system_pods.go:74] duration metric: took 3.170936057s to wait for pod list to return data ...
	I0507 22:38:51.837916  634245 default_sa.go:34] waiting for default service account to be created ...
	I0507 22:38:51.840456  634245 default_sa.go:45] found service account: "default"
	I0507 22:38:51.840477  634245 default_sa.go:55] duration metric: took 2.550401ms for default service account to be created ...
	I0507 22:38:51.840486  634245 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 22:38:51.844079  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:38:51.844109  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:38:51.844118  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:38:51.844127  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:38:51.844137  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:38:51.844146  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:38:51.844150  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:38:51.844157  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:38:51.844169  634245 retry.go:31] will retry after 305.063636ms: missing components: kube-dns
	I0507 22:38:52.153973  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:38:52.154010  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:38:52.154021  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:38:52.154050  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:38:52.154057  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:38:52.154063  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:38:52.154070  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:38:52.154076  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:38:52.154089  634245 retry.go:31] will retry after 338.212508ms: missing components: kube-dns
	I0507 22:38:52.497823  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:38:52.497872  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:38:52.497882  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:38:52.497892  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:38:52.497899  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:38:52.497914  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:38:52.497922  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:38:52.497934  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:38:52.497948  634245 retry.go:31] will retry after 378.459802ms: missing components: kube-dns
	I0507 22:38:52.882652  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:38:52.882692  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:38:52.882702  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:38:52.882714  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:38:52.882723  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:38:52.882729  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:38:52.882735  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:38:52.882748  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:38:52.882761  634245 retry.go:31] will retry after 469.882201ms: missing components: kube-dns
	I0507 22:38:56.380031  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:38:56.380076  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:38:56.380085  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:38:56.380093  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:38:56.380101  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:38:56.380107  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:38:56.380118  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:38:56.380123  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:38:56.380143  634245 retry.go:31] will retry after 667.365439ms: missing components: kube-dns
	I0507 22:38:59.331749  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:38:59.734392  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:38:59.734409  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:38:59.734421  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:38:59.734429  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:38:59.734437  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:38:59.734444  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:38:59.734450  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:38:59.734469  634245 retry.go:31] will retry after 597.243124ms: missing components: kube-dns
	I0507 22:39:00.337811  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:00.337852  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:00.337862  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:00.337870  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:00.337879  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:00.337885  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:00.337892  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:00.337901  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:00.337918  634245 retry.go:31] will retry after 789.889932ms: missing components: kube-dns
	I0507 22:39:01.132625  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:01.132657  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:01.132663  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:01.132670  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:01.132674  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:01.132678  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:01.132682  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:01.132687  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:01.132696  634245 retry.go:31] will retry after 951.868007ms: missing components: kube-dns
	I0507 22:39:02.090519  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:02.090553  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:02.090559  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:02.090566  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:02.090572  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:02.090578  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:02.090584  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:02.090590  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:02.090602  634245 retry.go:31] will retry after 1.341783893s: missing components: kube-dns
	I0507 22:39:03.437904  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:03.437942  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:03.437951  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:03.437960  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:03.437967  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:03.437975  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:03.437983  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:03.437990  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:03.438006  634245 retry.go:31] will retry after 1.876813009s: missing components: kube-dns
	I0507 22:39:05.320060  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:05.320094  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:05.320103  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:05.320109  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:05.320113  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:05.320117  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:05.320129  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:05.320133  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:05.320144  634245 retry.go:31] will retry after 2.6934314s: missing components: kube-dns
	I0507 22:39:08.018241  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:08.018273  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:08.018279  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:08.018287  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:08.018292  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:08.018296  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:08.018300  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:08.018309  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:08.018319  634245 retry.go:31] will retry after 2.494582248s: missing components: kube-dns
	I0507 22:39:10.518023  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:10.518059  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:10.518066  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:10.518072  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:10.518076  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:10.518081  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:10.518086  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:10.518092  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:10.518107  634245 retry.go:31] will retry after 3.420895489s: missing components: kube-dns
	I0507 22:39:13.945189  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:13.945235  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:13.945244  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:13.945253  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:13.945265  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:13.945286  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:13.945299  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:13.945306  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:13.945326  634245 retry.go:31] will retry after 4.133785681s: missing components: kube-dns
	I0507 22:39:18.084855  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:18.084897  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:18.084908  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:18.084917  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:18.084925  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:18.084933  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:18.084941  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:18.084947  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:18.084962  634245 retry.go:31] will retry after 5.595921491s: missing components: kube-dns
	I0507 22:39:23.686289  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:23.686321  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:23.686329  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:23.686335  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:23.686340  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:23.686344  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:23.686348  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:23.686352  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:23.686362  634245 retry.go:31] will retry after 6.3346098s: missing components: kube-dns
	I0507 22:39:30.025780  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:30.025815  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:30.025821  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:30.025830  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:30.025835  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:30.025841  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:30.025845  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:30.025850  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:30.025863  634245 retry.go:31] will retry after 7.962971847s: missing components: kube-dns
	I0507 22:39:37.993000  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:37.993042  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:37.993056  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:37.993066  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:37.993072  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:37.993078  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:37.993083  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:37.993089  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:37.993103  634245 retry.go:31] will retry after 12.096349863s: missing components: kube-dns
	I0507 22:39:50.094308  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:39:50.094342  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:39:50.094349  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:39:50.094354  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:39:50.094359  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:39:50.094363  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:39:50.094367  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:39:50.094371  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:39:50.094383  634245 retry.go:31] will retry after 11.924857264s: missing components: kube-dns
	I0507 22:40:02.023877  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:40:02.023911  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:40:02.023917  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:40:02.023924  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:40:02.023928  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:40:02.023932  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:40:02.023936  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:40:02.023940  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:40:02.023952  634245 retry.go:31] will retry after 14.772791249s: missing components: kube-dns
	I0507 22:40:16.802625  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:40:16.802657  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:40:16.802663  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:40:16.802669  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:40:16.802675  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:40:16.802679  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:40:16.802683  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:40:16.802687  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:40:16.802699  634245 retry.go:31] will retry after 20.175608267s: missing components: kube-dns
	I0507 22:40:37.796522  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:40:37.796552  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:40:37.796558  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:40:37.796565  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:40:37.796573  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:40:37.796579  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:40:37.796585  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:40:37.796598  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:40:37.796615  634245 retry.go:31] will retry after 28.062855718s: missing components: kube-dns
	I0507 22:41:05.865244  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:41:05.865283  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:41:05.865291  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:41:05.865296  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:41:05.865301  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:41:05.865306  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:41:05.865310  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:41:05.865314  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:41:05.865327  634245 retry.go:31] will retry after 40.022161579s: missing components: kube-dns
	I0507 22:41:45.895251  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:41:45.895286  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:41:45.895294  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:41:45.895300  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:41:45.895306  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:41:45.895313  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:41:45.895319  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:41:45.895324  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:41:45.895351  634245 retry.go:31] will retry after 37.970670965s: missing components: kube-dns
	I0507 22:42:23.871426  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:42:23.871460  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:42:23.871466  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:42:23.871472  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:42:23.871476  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:42:23.871481  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:42:23.871485  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:42:23.871489  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:42:23.871513  634245 retry.go:31] will retry after 47.568379235s: missing components: kube-dns
	I0507 22:43:11.445319  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:43:11.445357  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:43:11.445365  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:43:11.445371  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:43:11.445376  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:43:11.445380  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:43:11.445384  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:43:11.445388  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:43:11.445411  634245 retry.go:31] will retry after 1m7.577191067s: missing components: kube-dns
	I0507 22:44:19.027342  634245 system_pods.go:86] 7 kube-system pods found
	I0507 22:44:19.027380  634245 system_pods.go:89] "coredns-74ff55c5b-q8wsb" [88c0b410-63d1-4438-992a-1980770e1223] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:44:19.027389  634245 system_pods.go:89] "etcd-false-20210507223341-391940" [6fba9fbd-5859-417b-9e85-d597a40c7c4b] Running
	I0507 22:44:19.027395  634245 system_pods.go:89] "kube-apiserver-false-20210507223341-391940" [851185aa-692b-448f-831a-4a398cf32702] Running
	I0507 22:44:19.027400  634245 system_pods.go:89] "kube-controller-manager-false-20210507223341-391940" [7c8927b3-6586-4d81-986f-6113ac0f2ecd] Running
	I0507 22:44:19.027404  634245 system_pods.go:89] "kube-proxy-bmhxt" [b921100b-2d96-4d1c-950a-c7b650409f61] Running
	I0507 22:44:19.027408  634245 system_pods.go:89] "kube-scheduler-false-20210507223341-391940" [4c7d3ce5-4a67-41fb-bb10-9a0602e9e821] Running
	I0507 22:44:19.027412  634245 system_pods.go:89] "storage-provisioner" [edba8333-7a38-44c1-8166-c72a4443974d] Running
	I0507 22:44:19.030342  634245 out.go:170] 
	W0507 22:44:19.030464  634245 out.go:235] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0507 22:44:19.030480  634245 out.go:424] no arguments passed for "* \n" - returning raw string
	W0507 22:44:19.030488  634245 out.go:235] * 
	* 
	W0507 22:44:19.030504  634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n" - returning raw string
	W0507 22:44:19.030511  634245 out.go:424] no arguments passed for "  https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string
	W0507 22:44:19.030516  634245 out.go:424] no arguments passed for "* Please attach the following file to the GitHub issue:\n" - returning raw string
	W0507 22:44:19.030577  634245 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n  https://github.com/kubernetes/minikube/issues/new/choose\n\n* Please attach the following file to the GitHub issue:\n* - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt\n\n" - returning raw string
	W0507 22:44:19.032358  634245 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	W0507 22:44:19.032373  634245 out.go:235] │                                                                                                                                                                │
	│                                                                                                                                                                │
	W0507 22:44:19.032378  634245 out.go:235] │    * If the above advice does not help, please let us know:                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	W0507 22:44:19.032383  634245 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	W0507 22:44:19.032389  634245 out.go:235] │                                                                                                                                                                │
	│                                                                                                                                                                │
	W0507 22:44:19.032394  634245 out.go:235] │    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	W0507 22:44:19.032399  634245 out.go:235] │    * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt    │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt    │
	W0507 22:44:19.032408  634245 out.go:235] │                                                                                                                                                                │
	│                                                                                                                                                                │
	W0507 22:44:19.032412  634245 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	W0507 22:44:19.032420  634245 out.go:235] 
	
	I0507 22:44:19.034301  634245 out.go:170] 

** /stderr **
net_test.go:85: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/false/Start (637.46s)

TestNetworkPlugins/group/auto/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:176: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
net_test.go:186: hairpin connection unexpectedly succeeded - misconfigured test?
--- FAIL: TestNetworkPlugins/group/auto/HairPin (0.17s)

TestNetworkPlugins/group/kubenet/Start (629.71s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p kubenet-20210507224052-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=containerd
E0507 22:40:59.590886  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:41:40.551646  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:41:52.729649  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:41:59.411251  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:42:09.423314  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:42:15.754789  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:15.760089  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:15.770281  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:15.790588  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:15.830849  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:15.911176  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:16.071610  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:16.392294  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:17.032706  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:42:18.313259  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kubenet/Start
net_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubenet-20210507224052-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --network-plugin=kubenet --driver=docker  --container-runtime=containerd: exit status 80 (10m29.68184539s)

-- stdout --
	* [kubenet-20210507224052-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	  - MINIKUBE_LOCATION=master
	* Using the docker driver based on user configuration
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	* Starting control plane node kubenet-20210507224052-391940 in cluster kubenet-20210507224052-391940
	* Pulling base image ...
	* Creating docker container (CPUs=2, Memory=2048MB) ...
	* Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Verifying Kubernetes components...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	

-- /stdout --
** stderr ** 
	I0507 22:40:52.878518  672811 out.go:291] Setting OutFile to fd 1 ...
	I0507 22:40:52.878673  672811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:40:52.878682  672811 out.go:304] Setting ErrFile to fd 2...
	I0507 22:40:52.878685  672811 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:40:52.878775  672811 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	I0507 22:40:52.879029  672811 out.go:298] Setting JSON to false
	I0507 22:40:52.914708  672811 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":12020,"bootTime":1620415232,"procs":350,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0507 22:40:52.914791  672811 start.go:118] virtualization: kvm guest
	I0507 22:40:52.917552  672811 out.go:170] * [kubenet-20210507224052-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0507 22:40:52.919004  672811 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	I0507 22:40:52.920381  672811 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0507 22:40:52.921826  672811 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	I0507 22:40:52.923176  672811 out.go:170]   - MINIKUBE_LOCATION=master
	I0507 22:40:52.923813  672811 driver.go:322] Setting default libvirt URI to qemu:///system
	I0507 22:40:52.971346  672811 docker.go:119] docker version: linux-19.03.15
	I0507 22:40:52.971454  672811 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 22:40:53.057850  672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.008217117 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 22:40:53.057941  672811 docker.go:225] overlay module found
	I0507 22:40:53.060186  672811 out.go:170] * Using the docker driver based on user configuration
	I0507 22:40:53.060214  672811 start.go:276] selected driver: docker
	I0507 22:40:53.060222  672811 start.go:718] validating driver "docker" against <nil>
	I0507 22:40:53.060244  672811 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0507 22:40:53.060288  672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0507 22:40:53.060303  672811 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string
	W0507 22:40:53.060323  672811 out.go:235] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	W0507 22:40:53.060334  672811 out.go:424] no arguments passed for "  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string
	I0507 22:40:53.061888  672811 out.go:170]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0507 22:40:53.062981  672811 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 22:40:53.161855  672811 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:5 ContainersRunning:5 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:69 OomKillDisable:true NGoroutines:79 SystemTime:2021-05-07 22:40:53.100417898 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 22:40:53.162014  672811 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0507 22:40:53.162237  672811 start_flags.go:733] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0507 22:40:53.162265  672811 cni.go:89] network plugin configured as "kubenet", returning disabled
	I0507 22:40:53.162274  672811 start_flags.go:273] config:
	{Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 22:40:53.165187  672811 out.go:170] * Starting control plane node kubenet-20210507224052-391940 in cluster kubenet-20210507224052-391940
	I0507 22:40:53.165236  672811 cache.go:111] Beginning downloading kic base image for docker with containerd
	W0507 22:40:53.165246  672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string
	W0507 22:40:53.165261  672811 out.go:424] no arguments passed for "* Pulling base image ...\n" - returning raw string
	I0507 22:40:53.166925  672811 out.go:170] * Pulling base image ...
	I0507 22:40:53.166966  672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 22:40:53.167001  672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 22:40:53.167016  672811 cache.go:54] Caching tarball of preloaded images
	I0507 22:40:53.167026  672811 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
	I0507 22:40:53.167043  672811 preload.go:132] Found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 in cache, skipping download
	I0507 22:40:53.167054  672811 cache.go:57] Finished verifying existence of preloaded tar for  v1.20.2 on containerd
	I0507 22:40:53.167059  672811 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
	I0507 22:40:53.167071  672811 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
	I0507 22:40:53.167104  672811 image.go:130] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon
	I0507 22:40:53.167176  672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ...
	I0507 22:40:53.167206  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json: {Name:mk6f7d3b17ed614f6ce609cdf1a5d1f675228263 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:40:53.247777  672811 image.go:134] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local docker daemon, skipping pull
	I0507 22:40:53.247803  672811 cache.go:155] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in daemon, skipping pull
	I0507 22:40:53.247832  672811 cache.go:194] Successfully downloaded all kic artifacts
	I0507 22:40:53.247867  672811 start.go:313] acquiring machines lock for kubenet-20210507224052-391940: {Name:mk343db27c7581f71b72b6b890cfa139aa788b8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 22:40:53.247996  672811 start.go:317] acquired machines lock for "kubenet-20210507224052-391940" in 107.964µs
	I0507 22:40:53.248026  672811 start.go:89] Provisioning new machine with config: &{Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false} &{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	I0507 22:40:53.248124  672811 start.go:126] createHost starting for "" (driver="docker")
	I0507 22:40:53.250851  672811 out.go:197] * Creating docker container (CPUs=2, Memory=2048MB) ...
	I0507 22:40:53.251111  672811 start.go:160] libmachine.API.Create for "kubenet-20210507224052-391940" (driver="docker")
	I0507 22:40:53.251145  672811 client.go:168] LocalClient.Create starting
	I0507 22:40:53.251244  672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem
	I0507 22:40:53.251275  672811 main.go:128] libmachine: Decoding PEM data...
	I0507 22:40:53.251311  672811 main.go:128] libmachine: Parsing certificate...
	I0507 22:40:53.251453  672811 main.go:128] libmachine: Reading certificate data from /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem
	I0507 22:40:53.251479  672811 main.go:128] libmachine: Decoding PEM data...
	I0507 22:40:53.251496  672811 main.go:128] libmachine: Parsing certificate...
	I0507 22:40:53.251894  672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0507 22:40:53.291644  672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0507 22:40:53.291722  672811 network_create.go:249] running [docker network inspect kubenet-20210507224052-391940] to gather additional debugging logs...
	I0507 22:40:53.291743  672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940
	W0507 22:40:53.341497  672811 cli_runner.go:162] docker network inspect kubenet-20210507224052-391940 returned with exit code 1
	I0507 22:40:53.341550  672811 network_create.go:252] error running [docker network inspect kubenet-20210507224052-391940]: docker network inspect kubenet-20210507224052-391940: exit status 1
	stdout:
	[]
	
	stderr:
	Error: No such network: kubenet-20210507224052-391940
	I0507 22:40:53.341581  672811 network_create.go:254] output of [docker network inspect kubenet-20210507224052-391940]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error: No such network: kubenet-20210507224052-391940
	
	** /stderr **
	I0507 22:40:53.342256  672811 cli_runner.go:115] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0507 22:40:53.385054  672811 network.go:215] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName:br-b7a55e9e83b1 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:be:99:f6:89}}
	I0507 22:40:53.386400  672811 network.go:263] reserving subnet 192.168.58.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.58.0:0xc000374028] misses:0}
	I0507 22:40:53.386443  672811 network.go:210] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
	I0507 22:40:53.386463  672811 network_create.go:100] attempt to create docker network kubenet-20210507224052-391940 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
	I0507 22:40:53.386518  672811 cli_runner.go:115] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true kubenet-20210507224052-391940
	I0507 22:40:53.469239  672811 network_create.go:84] docker network kubenet-20210507224052-391940 192.168.58.0/24 created
	I0507 22:40:53.469289  672811 kic.go:106] calculated static IP "192.168.58.2" for the "kubenet-20210507224052-391940" container
	I0507 22:40:53.469371  672811 cli_runner.go:115] Run: docker ps -a --format {{.Names}}
	I0507 22:40:53.510838  672811 cli_runner.go:115] Run: docker volume create kubenet-20210507224052-391940 --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true
	I0507 22:40:53.559162  672811 oci.go:102] Successfully created a docker volume kubenet-20210507224052-391940
	I0507 22:40:53.559286  672811 cli_runner.go:115] Run: docker run --rm --name kubenet-20210507224052-391940-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --entrypoint /usr/bin/test -v kubenet-20210507224052-391940:/var gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -d /var/lib
	I0507 22:40:54.328995  672811 oci.go:106] Successfully prepared a docker volume kubenet-20210507224052-391940
	W0507 22:40:54.329069  672811 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	W0507 22:40:54.329079  672811 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	I0507 22:40:54.329130  672811 cli_runner.go:115] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0507 22:40:54.329143  672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 22:40:54.329178  672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 22:40:54.329192  672811 kic.go:179] Starting extracting preloaded images to volume ...
	I0507 22:40:54.329240  672811 cli_runner.go:115] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir
	I0507 22:40:54.427070  672811 cli_runner.go:115] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname kubenet-20210507224052-391940 --name kubenet-20210507224052-391940 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=kubenet-20210507224052-391940 --network kubenet-20210507224052-391940 --ip 192.168.58.2 --volume kubenet-20210507224052-391940:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e
	I0507 22:40:55.043077  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Running}}
	I0507 22:40:55.107025  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
	I0507 22:40:55.165720  672811 cli_runner.go:115] Run: docker exec kubenet-20210507224052-391940 stat /var/lib/dpkg/alternatives/iptables
	I0507 22:40:55.317730  672811 oci.go:278] the created container "kubenet-20210507224052-391940" has a running status.
	I0507 22:40:55.317785  672811 kic.go:210] Creating ssh key for kic: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa...
	I0507 22:40:55.465459  672811 kic_runner.go:188] docker (temp): /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0507 22:40:55.874845  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
	I0507 22:40:55.926608  672811 kic_runner.go:94] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0507 22:40:55.926628  672811 kic_runner.go:115] Args: [docker exec --privileged kubenet-20210507224052-391940 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0507 22:40:59.038214  672811 cli_runner.go:168] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v kubenet-20210507224052-391940:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e -I lz4 -xf /preloaded.tar -C /extractDir: (4.708855042s)
	I0507 22:40:59.038245  672811 kic.go:188] duration metric: took 4.709051 seconds to extract preloaded images to volume
	I0507 22:40:59.038321  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
	I0507 22:40:59.081058  672811 machine.go:88] provisioning docker machine ...
	I0507 22:40:59.081096  672811 ubuntu.go:169] provisioning hostname "kubenet-20210507224052-391940"
	I0507 22:40:59.081153  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:40:59.119701  672811 main.go:128] libmachine: Using SSH client type: native
	I0507 22:40:59.119896  672811 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 33326 <nil> <nil>}
	I0507 22:40:59.119916  672811 main.go:128] libmachine: About to run SSH command:
	sudo hostname kubenet-20210507224052-391940 && echo "kubenet-20210507224052-391940" | sudo tee /etc/hostname
	I0507 22:40:59.251144  672811 main.go:128] libmachine: SSH cmd err, output: <nil>: kubenet-20210507224052-391940
	
	I0507 22:40:59.251212  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:40:59.290133  672811 main.go:128] libmachine: Using SSH client type: native
	I0507 22:40:59.290316  672811 main.go:128] libmachine: &{{{<nil> 0 [] [] []} docker [0x802720] 0x8026e0 <nil>  [] 0s} 127.0.0.1 33326 <nil> <nil>}
	I0507 22:40:59.290356  672811 main.go:128] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubenet-20210507224052-391940' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubenet-20210507224052-391940/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubenet-20210507224052-391940' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0507 22:40:59.403817  672811 main.go:128] libmachine: SSH cmd err, output: <nil>: 
	I0507 22:40:59.403851  672811 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube CaCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube}
	I0507 22:40:59.403874  672811 ubuntu.go:177] setting up certificates
	I0507 22:40:59.403887  672811 provision.go:83] configureAuth start
	I0507 22:40:59.403966  672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940
	I0507 22:40:59.447361  672811 provision.go:137] copyHostCerts
	I0507 22:40:59.447423  672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem, removing ...
	I0507 22:40:59.447435  672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem
	I0507 22:40:59.447489  672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.pem (1078 bytes)
	I0507 22:40:59.447657  672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem, removing ...
	I0507 22:40:59.447677  672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem
	I0507 22:40:59.447707  672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cert.pem (1123 bytes)
	I0507 22:40:59.447795  672811 exec_runner.go:145] found /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem, removing ...
	I0507 22:40:59.447805  672811 exec_runner.go:190] rm: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem
	I0507 22:40:59.447843  672811 exec_runner.go:152] cp: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/key.pem (1675 bytes)
	I0507 22:40:59.447895  672811 provision.go:111] generating server cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem org=jenkins.kubenet-20210507224052-391940 san=[192.168.58.2 127.0.0.1 localhost 127.0.0.1 minikube kubenet-20210507224052-391940]
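The provision step above generates a server certificate covering those SANs. A rough stand-alone equivalent with the `openssl` CLI (file names and subject are illustrative, not minikube's; `-addext` needs OpenSSL 1.1.1+):

```shell
# Self-signed server cert with IP and DNS SANs, similar in spirit to the
# provision.go step above (filenames and subject are made up for illustration).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout server.key -out server.crt \
  -subj "/O=jenkins.kubenet-20210507224052-391940/CN=minikube" \
  -addext "subjectAltName=IP:192.168.58.2,IP:127.0.0.1,DNS:localhost,DNS:minikube"
```

Clients validating the API server by IP will only accept the cert if the IP appears in the SAN list, which is why the log enumerates every address up front.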
	I0507 22:40:59.852941  672811 provision.go:165] copyRemoteCerts
	I0507 22:40:59.853012  672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0507 22:40:59.853074  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:40:59.896021  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:40:59.978856  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0507 22:40:59.995226  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server.pem --> /etc/docker/server.pem (1261 bytes)
	I0507 22:41:00.011913  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0507 22:41:00.027622  672811 provision.go:86] duration metric: configureAuth took 623.719966ms
	I0507 22:41:00.027644  672811 ubuntu.go:193] setting minikube options for container-runtime
	I0507 22:41:00.027808  672811 machine.go:91] provisioned docker machine in 946.729843ms
	I0507 22:41:00.027821  672811 client.go:171] LocalClient.Create took 6.776670216s
	I0507 22:41:00.027841  672811 start.go:168] duration metric: libmachine.API.Create for "kubenet-20210507224052-391940" took 6.776727752s
	I0507 22:41:00.027849  672811 start.go:267] post-start starting for "kubenet-20210507224052-391940" (driver="docker")
	I0507 22:41:00.027855  672811 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0507 22:41:00.027897  672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0507 22:41:00.027946  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:41:00.075235  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:41:00.166719  672811 ssh_runner.go:149] Run: cat /etc/os-release
	I0507 22:41:00.169362  672811 main.go:128] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0507 22:41:00.169391  672811 main.go:128] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0507 22:41:00.169407  672811 main.go:128] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0507 22:41:00.169419  672811 info.go:137] Remote host: Ubuntu 20.04.2 LTS
	I0507 22:41:00.169433  672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/addons for local assets ...
	I0507 22:41:00.169503  672811 filesync.go:118] Scanning /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files for local assets ...
	I0507 22:41:00.169623  672811 start.go:270] post-start completed in 141.767397ms
	I0507 22:41:00.169915  672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940
	I0507 22:41:00.210576  672811 profile.go:148] Saving config to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/config.json ...
	I0507 22:41:00.210783  672811 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 22:41:00.210835  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:41:00.247709  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:41:00.327660  672811 start.go:129] duration metric: createHost completed in 7.07952255s
	I0507 22:41:00.327686  672811 start.go:80] releasing machines lock for "kubenet-20210507224052-391940", held for 7.079675771s
	I0507 22:41:00.327754  672811 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" kubenet-20210507224052-391940
	I0507 22:41:00.367166  672811 ssh_runner.go:149] Run: curl -sS -m 2 https://k8s.gcr.io/
	I0507 22:41:00.367177  672811 ssh_runner.go:149] Run: systemctl --version
	I0507 22:41:00.367228  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:41:00.367248  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:41:00.408143  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:41:00.408527  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:41:00.487269  672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service crio
	I0507 22:41:00.537919  672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0507 22:41:00.547201  672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.socket
	I0507 22:41:00.564793  672811 ssh_runner.go:149] Run: sudo systemctl stop -f docker.service
	I0507 22:41:00.574599  672811 ssh_runner.go:149] Run: sudo systemctl disable docker.socket
	I0507 22:41:00.638969  672811 ssh_runner.go:149] Run: sudo systemctl mask docker.service
	I0507 22:41:00.698972  672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service docker
	I0507 22:41:00.709630  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	image-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0507 22:41:00.723315  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo mkdir -p /etc/containerd && printf %s "cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgpzdGF0ZSA9ICIvcnVuL2NvbnRhaW5lcmQiCm9vbV9zY29yZSA9IDAKCltncnBjXQogIGFkZHJlc3MgPSAiL3J1bi9jb250YWluZXJkL2NvbnRhaW5lcmQuc29jayIKICB1aWQgPSAwCiAgZ2lkID0gMAogIG1heF9yZWN2X21lc3NhZ2Vfc2l6ZSA9IDE2Nzc3MjE2CiAgbWF4X3NlbmRfbWVzc2FnZV9zaXplID0gMTY3NzcyMTYKCltkZWJ1Z10KICBhZGRyZXNzID0gIiIKICB1aWQgPSAwCiAgZ2lkID0gMAogIGxldmVsID0gIiIKClttZXRyaWNzXQogIGFkZHJlc3MgPSAiIgogIGdycGNfaGlzdG9ncmFtID0gZmFsc2UKCltjZ3JvdXBdCiAgcGF0aCA9ICIiCgpbcGx1Z2luc10KICBbcGx1Z2lucy5jZ3JvdXBzXQogICAgbm9fcHJvbWV0aGV1cyA9IGZhbHNlCiAgW3BsdWdpbnMuY3JpXQogICAgc3RyZWFtX3NlcnZlcl9hZGRyZXNzID0gIiIKICAgIHN0cmVhbV9zZXJ2ZXJfcG9ydCA9ICIxMDAxMCIKICAgIGVuYWJsZV9zZWxpbnV4ID0gZmFsc2UKICAgIHNhbmRib3hfaW1hZ2UgPSAiazhzLmdjci5pby9wYXVzZTozLjIiCiAgICBzdGF0c19jb2xsZWN0X3BlcmlvZCA9IDEwCiAgICBzeXN0ZW1kX2Nncm91cCA9IGZhbHNlCiAgICBlbmFibGVfdGxzX3N0cmVhbWluZyA9IGZhbHNlCiAgICBtYXhfY29udGFpbmVyX2xvZ19saW5lX3NpemUgPSAxNjM4NAogICAgW3BsdWdpbnMuY3JpLmNvbnRhaW5lcmRdCiAgICAgIHNuYXBzaG90dGVyID0gIm92ZXJsYXlmcyIKICAgICAgbm9fcGl2b3QgPSB0cnVlCiAgICAgIFtwbHVnaW5zLmNyaS5jb250YWluZXJkLmRlZmF1bHRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiaW8uY29udGFpbmVyZC5ydW50aW1lLnYxLmxpbnV4IgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgICBbcGx1Z2lucy5jcmkuY29udGFpbmVyZC51bnRydXN0ZWRfd29ya2xvYWRfcnVudGltZV0KICAgICAgICBydW50aW1lX3R5cGUgPSAiIgogICAgICAgIHJ1bnRpbWVfZW5naW5lID0gIiIKICAgICAgICBydW50aW1lX3Jvb3QgPSAiIgogICAgW3BsdWdpbnMuY3JpLmNuaV0KICAgICAgYmluX2RpciA9ICIvb3B0L2NuaS9iaW4iCiAgICAgIGNvbmZfZGlyID0gIi9ldGMvY25pL25ldC5kIgogICAgICBjb25mX3RlbXBsYXRlID0gIiIKICAgIFtwbHVnaW5zLmNyaS5yZWdpc3RyeV0KICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnNdCiAgICAgICAgW3BsdWdpbnMuY3JpLnJlZ2lzdHJ5Lm1pcnJvcnMuImRvY2tlci5pbyJdCiAgICAgICAgICBlbmRwb2ludCA9IFsiaHR0cHM6Ly9yZWdpc3RyeS0xLmRvY2tlci5pbyJdCiAgICAgICAgW3BsdWdpbnMuZGlmZi1zZXJ2aWNlXQogICAgZGVmYXVsdCA9IFsid2Fsa2luZyJdCiAgW3BsdWdpbnMubGludXhdCiAgICBzaGltID0gImNvbnRhaW5lcmQtc2hpbSIKICAgIHJ1bnRpbWUgPSAicnVuYyIKICAgIHJ1bnRpbWVfcm9vdCA9ICIiCiAgICBub19zaGltID0gZmFsc2UKICAgIHNoaW1fZGVidWcgPSBmYWxzZQogIFtwbHVnaW5zLnNjaGVkdWxlcl0KICAgIHBhdXNlX3RocmVzaG9sZCA9IDAuMDIKICAgIGRlbGV0aW9uX3RocmVzaG9sZCA9IDAKICAgIG11dGF0aW9uX3RocmVzaG9sZCA9IDEwMAogICAgc2NoZWR1bGVfZGVsYXkgPSAiMHMiCiAgICBzdGFydHVwX2RlbGF5ID0gIjEwMG1zIgo=" | base64 -d | sudo tee /etc/containerd/config.toml"
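minikube ships the containerd config above as a base64 blob so it survives shell quoting on the way to `tee`. To inspect such a payload, decode it locally first; a minimal sketch (the short sample string stands in for the full blob logged above, and decodes to its first line):

```shell
# Decode a base64-encoded config payload before piping it anywhere.
# SAMPLE_B64 is a tiny stand-in for the full blob in the log above.
SAMPLE_B64='cm9vdCA9ICIvdmFyL2xpYi9jb250YWluZXJkIgo='
echo "$SAMPLE_B64" | base64 -d
```

The same `| base64 -d | sudo tee …` pipeline in the log writes the decoded TOML to `/etc/containerd/config.toml` on the node.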
	I0507 22:41:00.737455  672811 ssh_runner.go:149] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0507 22:41:00.744876  672811 crio.go:128] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0507 22:41:00.744933  672811 ssh_runner.go:149] Run: sudo modprobe br_netfilter
	I0507 22:41:00.753834  672811 ssh_runner.go:149] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
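The `sudo sh -c "echo 1 > /proc/sys/..."` form above is deliberate: with a bare `sudo echo 1 > file`, the redirection is opened by the unprivileged calling shell before `sudo` ever runs. A sketch of the working pattern, using a temp file in place of the `/proc` target:

```shell
# The redirection must happen inside the child shell so the target file is
# opened with that shell's privileges; /proc/sys writes under sudo need
# exactly this shape. (Temp file used here so the sketch needs no root.)
target=$(mktemp)
sh -c "echo 1 > $target"
cat "$target"
```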
	I0507 22:41:00.761420  672811 ssh_runner.go:149] Run: sudo systemctl daemon-reload
	I0507 22:41:00.827226  672811 ssh_runner.go:149] Run: sudo systemctl restart containerd
	I0507 22:41:00.892592  672811 start.go:368] Will wait 60s for socket path /run/containerd/containerd.sock
	I0507 22:41:00.892666  672811 ssh_runner.go:149] Run: stat /run/containerd/containerd.sock
	I0507 22:41:00.896809  672811 start.go:393] Will wait 60s for crictl version
	I0507 22:41:00.896869  672811 ssh_runner.go:149] Run: sudo crictl version
	I0507 22:41:00.922312  672811 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: Process exited with status 1
	stdout:
	
	stderr:
	time="2021-05-07T22:41:00Z" level=fatal msg="getting the runtime version: rpc error: code = Unknown desc = server is not initialized yet"
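The `retry.go` line above polls `crictl version` with a backoff until containerd finishes initializing. A generic shell sketch of that wait-until-ready loop (`wait_for` is a made-up helper, not a minikube or crictl command):

```shell
# wait_for TIMEOUT_SECONDS CMD...: rerun CMD until it succeeds or the
# deadline passes; mirrors the "Will wait 60s" / retry behaviour above.
wait_for() {
  deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@"; do
    [ "$(date +%s)" -ge "$deadline" ] && return 1
    sleep 1
  done
}
```

With it, `wait_for 60 sudo crictl version` would keep retrying for up to a minute, which is what makes the transient "server is not initialized yet" error above harmless.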
	I0507 22:41:11.971610  672811 ssh_runner.go:149] Run: sudo crictl version
	I0507 22:41:12.042782  672811 start.go:402] Version:  0.1.0
	RuntimeName:  containerd
	RuntimeVersion:  1.4.4
	RuntimeApiVersion:  v1alpha2
	I0507 22:41:12.042850  672811 ssh_runner.go:149] Run: containerd --version
	I0507 22:41:12.066863  672811 out.go:170] * Preparing Kubernetes v1.20.2 on containerd 1.4.4 ...
	I0507 22:41:12.066969  672811 cli_runner.go:115] Run: docker network inspect kubenet-20210507224052-391940 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0507 22:41:12.105280  672811 ssh_runner.go:149] Run: grep 192.168.58.1	host.minikube.internal$ /etc/hosts
	I0507 22:41:12.108647  672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.58.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 22:41:12.117548  672811 localpath.go:92] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.crt -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.crt
	I0507 22:41:12.117660  672811 localpath.go:117] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/client.key -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key
	I0507 22:41:12.117779  672811 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 22:41:12.117805  672811 preload.go:106] Found local preload: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 22:41:12.117839  672811 ssh_runner.go:149] Run: sudo crictl images --output json
	I0507 22:41:12.139675  672811 containerd.go:571] all images are preloaded for containerd runtime.
	I0507 22:41:12.139694  672811 containerd.go:481] Images already preloaded, skipping extraction
	I0507 22:41:12.139737  672811 ssh_runner.go:149] Run: sudo crictl images --output json
	I0507 22:41:12.160780  672811 containerd.go:571] all images are preloaded for containerd runtime.
	I0507 22:41:12.160799  672811 cache_images.go:74] Images are preloaded, skipping loading
	I0507 22:41:12.160836  672811 ssh_runner.go:149] Run: sudo crictl info
	I0507 22:41:12.181806  672811 cni.go:89] network plugin configured as "kubenet", returning disabled
	I0507 22:41:12.181827  672811 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0507 22:41:12.181838  672811 kubeadm.go:153] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.58.2 APIServerPort:8443 KubernetesVersion:v1.20.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubenet-20210507224052-391940 NodeName:kubenet-20210507224052-391940 DNSDomain:cluster.local CRISocket:/run/containerd/containerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.58.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.58.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
	I0507 22:41:12.181948  672811 kubeadm.go:157] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.58.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /run/containerd/containerd.sock
	  name: "kubenet-20210507224052-391940"
	  kubeletExtraArgs:
	    node-ip: 192.168.58.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.58.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	
	I0507 22:41:12.182024  672811 kubeadm.go:901] kubelet [Unit]
	Wants=containerd.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --hostname-override=kubenet-20210507224052-391940 --image-service-endpoint=unix:///run/containerd/containerd.sock --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=kubenet --node-ip=192.168.58.2 --pod-cidr=10.244.0.0/16 --runtime-request-timeout=15m
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0507 22:41:12.182065  672811 ssh_runner.go:149] Run: sudo ls /var/lib/minikube/binaries/v1.20.2
	I0507 22:41:12.190005  672811 binaries.go:44] Found k8s binaries, skipping transfer
	I0507 22:41:12.190053  672811 ssh_runner.go:149] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0507 22:41:12.196524  672811 ssh_runner.go:316] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (572 bytes)
	I0507 22:41:12.208112  672811 ssh_runner.go:316] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0507 22:41:12.219787  672811 ssh_runner.go:316] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (1868 bytes)
	I0507 22:41:12.234762  672811 ssh_runner.go:149] Run: grep 192.168.58.2	control-plane.minikube.internal$ /etc/hosts
	I0507 22:41:12.238162  672811 ssh_runner.go:149] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.58.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0507 22:41:12.247659  672811 certs.go:52] Setting up /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940 for IP: 192.168.58.2
	I0507 22:41:12.247732  672811 certs.go:171] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key
	I0507 22:41:12.247761  672811 certs.go:171] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key
	I0507 22:41:12.247864  672811 certs.go:282] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/client.key
	I0507 22:41:12.247917  672811 certs.go:286] generating minikube signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041
	I0507 22:41:12.247934  672811 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 with IP's: [192.168.58.2 10.96.0.1 127.0.0.1 10.0.0.1]
	I0507 22:41:12.324253  672811 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 ...
	I0507 22:41:12.324281  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041: {Name:mk17a9fadc289bdd993cd89cf73f7e42a11db951 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:41:12.324441  672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 ...
	I0507 22:41:12.324457  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041: {Name:mk4f1b00ef492dfe1e4e53295535dd818e4b8776 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:41:12.324556  672811 certs.go:297] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt
	I0507 22:41:12.324624  672811 certs.go:301] copying /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key.cee25041 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key
	I0507 22:41:12.324690  672811 certs.go:286] generating aggregator signed cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key
	I0507 22:41:12.324704  672811 crypto.go:69] Generating cert /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt with IP's: []
	I0507 22:41:12.462717  672811 crypto.go:157] Writing cert to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt ...
	I0507 22:41:12.462741  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt: {Name:mk3b377543768468ecb5ae6c2ac7692fea50fd9a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:41:12.462892  672811 crypto.go:165] Writing key to /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key ...
	I0507 22:41:12.462906  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key: {Name:mkfe92c524b556c20012d8a91c085ac4bc69ff7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:41:12.463104  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem (1338 bytes)
	W0507 22:41:12.463147  672811 certs.go:357] ignoring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940_empty.pem, impossibly tiny 0 bytes
	I0507 22:41:12.463164  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca-key.pem (1679 bytes)
	I0507 22:41:12.463201  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/ca.pem (1078 bytes)
	I0507 22:41:12.463240  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/cert.pem (1123 bytes)
	I0507 22:41:12.463276  672811 certs.go:361] found cert: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/key.pem (1675 bytes)
	I0507 22:41:12.464251  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0507 22:41:12.481245  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0507 22:41:12.549535  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0507 22:41:12.567323  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/kubenet-20210507224052-391940/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0507 22:41:12.586572  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0507 22:41:12.605164  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0507 22:41:12.622859  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0507 22:41:12.639720  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0507 22:41:12.659044  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/certs/391940.pem --> /usr/share/ca-certificates/391940.pem (1338 bytes)
	I0507 22:41:12.677161  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0507 22:41:12.693007  672811 ssh_runner.go:316] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0507 22:41:12.704857  672811 ssh_runner.go:149] Run: openssl version
	I0507 22:41:12.709921  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0507 22:41:12.717584  672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0507 22:41:12.720534  672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1111 May  7 21:50 /usr/share/ca-certificates/minikubeCA.pem
	I0507 22:41:12.720581  672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0507 22:41:12.725167  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0507 22:41:12.731804  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/391940.pem && ln -fs /usr/share/ca-certificates/391940.pem /etc/ssl/certs/391940.pem"
	I0507 22:41:12.738661  672811 ssh_runner.go:149] Run: ls -la /usr/share/ca-certificates/391940.pem
	I0507 22:41:12.741622  672811 certs.go:402] hashing: -rw-r--r-- 1 root root 1338 May  7 21:57 /usr/share/ca-certificates/391940.pem
	I0507 22:41:12.741658  672811 ssh_runner.go:149] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/391940.pem
	I0507 22:41:12.746205  672811 ssh_runner.go:149] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/391940.pem /etc/ssl/certs/51391683.0"
	I0507 22:41:12.752891  672811 kubeadm.go:381] StartCluster: {Name:kubenet-20210507224052-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:kubenet-20210507224052-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:kubenet FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:5m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 22:41:12.752980  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:paused Name: Namespaces:[kube-system]}
	I0507 22:41:12.753082  672811 ssh_runner.go:149] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0507 22:41:12.775624  672811 cri.go:76] found id: ""
	I0507 22:41:12.775678  672811 ssh_runner.go:149] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0507 22:41:12.781880  672811 ssh_runner.go:149] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0507 22:41:12.788117  672811 kubeadm.go:220] ignoring SystemVerification for kubeadm because of docker driver
	I0507 22:41:12.788153  672811 ssh_runner.go:149] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0507 22:41:12.794718  672811 kubeadm.go:151] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0507 22:41:12.794764  672811 ssh_runner.go:240] Start: /bin/bash -c "sudo env PATH=/var/lib/minikube/binaries/v1.20.2:$PATH kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	W0507 22:41:29.502582  672811 out.go:424] no arguments passed for "  - Generating certificates and keys ..." - returning raw string
	W0507 22:41:29.502611  672811 out.go:424] no arguments passed for "  - Generating certificates and keys ..." - returning raw string
	I0507 22:41:29.504051  672811 out.go:197]   - Generating certificates and keys ...
	W0507 22:41:29.505275  672811 out.go:424] no arguments passed for "  - Booting up control plane ..." - returning raw string
	W0507 22:41:29.505298  672811 out.go:424] no arguments passed for "  - Booting up control plane ..." - returning raw string
	I0507 22:41:29.506842  672811 out.go:197]   - Booting up control plane ...
	W0507 22:41:29.507828  672811 out.go:424] no arguments passed for "  - Configuring RBAC rules ..." - returning raw string
	W0507 22:41:29.507851  672811 out.go:424] no arguments passed for "  - Configuring RBAC rules ..." - returning raw string
	I0507 22:41:29.509381  672811 out.go:197]   - Configuring RBAC rules ...
	I0507 22:41:29.511102  672811 cni.go:89] network plugin configured as "kubenet", returning disabled
	I0507 22:41:29.511144  672811 ssh_runner.go:149] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0507 22:41:29.511202  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:29.511202  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:36.452032  672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (6.940764477s)
	I0507 22:41:36.452084  672811 ssh_runner.go:189] Completed: sudo /var/lib/minikube/binaries/v1.20.2/kubectl label nodes minikube.k8s.io/version=v1.20.0 minikube.k8s.io/commit=c31bd57f93d45726e4bd30607374f8c720e70e95 minikube.k8s.io/name=kubenet-20210507224052-391940 minikube.k8s.io/updated_at=2021_05_07T22_41_29_0700 --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (6.940777019s)
	I0507 22:41:36.452120  672811 ssh_runner.go:189] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (6.9409632s)
	I0507 22:41:36.452130  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:36.452135  672811 ops.go:34] apiserver oom_adj: -16
	I0507 22:41:37.133448  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:37.634119  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:38.134120  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:38.633786  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:39.133311  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:39.633524  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:40.134249  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:40.633580  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:41.133642  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:41.633685  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:42.133984  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:42.633334  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:43.133263  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:43.634078  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:44.133696  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:44.633466  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:45.133959  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:45.633643  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:46.133797  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:46.634042  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:47.133888  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:47.634155  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:48.133838  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:48.633584  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:49.134019  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:49.633305  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:50.133859  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:50.634269  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:51.133941  672811 ssh_runner.go:149] Run: sudo /var/lib/minikube/binaries/v1.20.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0507 22:41:51.198474  672811 kubeadm.go:977] duration metric: took 21.687320394s to wait for elevateKubeSystemPrivileges.
	I0507 22:41:51.198504  672811 kubeadm.go:383] StartCluster complete in 38.445622759s
	I0507 22:41:51.198526  672811 settings.go:142] acquiring lock: {Name:mkbc12d45ea1a96167acb2e3885011008220fc1e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:41:51.198634  672811 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	I0507 22:41:51.201538  672811 lock.go:36] WriteFile acquiring /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig: {Name:mk53c460e0a047a0806c95f27e36717b9bf9f789 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0507 22:41:51.718321  672811 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "kubenet-20210507224052-391940" rescaled to 1
	I0507 22:41:51.718369  672811 start.go:201] Will wait 5m0s for node &{Name: IP:192.168.58.2 Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}
	W0507 22:41:51.718401  672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
	W0507 22:41:51.718425  672811 out.go:424] no arguments passed for "* Verifying Kubernetes components...\n" - returning raw string
	I0507 22:41:51.720457  672811 out.go:170] * Verifying Kubernetes components...
	I0507 22:41:51.720524  672811 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0507 22:41:51.718471  672811 addons.go:328] enableAddons start: toEnable=map[], additional=[]
	I0507 22:41:51.720595  672811 addons.go:55] Setting storage-provisioner=true in profile "kubenet-20210507224052-391940"
	I0507 22:41:51.718753  672811 cache.go:108] acquiring lock: {Name:mk66f3ed174a0fda2e3a4fd9a235ceef9553bc77 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0507 22:41:51.720621  672811 addons.go:55] Setting default-storageclass=true in profile "kubenet-20210507224052-391940"
	I0507 22:41:51.720638  672811 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubenet-20210507224052-391940"
	I0507 22:41:51.720676  672811 addons.go:131] Setting addon storage-provisioner=true in "kubenet-20210507224052-391940"
	W0507 22:41:51.720694  672811 addons.go:140] addon storage-provisioner should already be in state true
	I0507 22:41:51.720700  672811 cache.go:116] /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 exists
	I0507 22:41:51.720716  672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ...
	I0507 22:41:51.720721  672811 cache.go:97] cache image "minikube-local-cache-test:functional-20210507215728-391940" -> "/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940" took 1.979419ms
	I0507 22:41:51.720737  672811 cache.go:81] save to tar file minikube-local-cache-test:functional-20210507215728-391940 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 succeeded
	I0507 22:41:51.720751  672811 cache.go:88] Successfully saved all images to host disk.
	I0507 22:41:51.721038  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
	I0507 22:41:51.721675  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
	I0507 22:41:51.721703  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
	I0507 22:41:51.740835  672811 node_ready.go:35] waiting up to 5m0s for node "kubenet-20210507224052-391940" to be "Ready" ...
	I0507 22:41:51.745077  672811 node_ready.go:49] node "kubenet-20210507224052-391940" has status "Ready":"True"
	I0507 22:41:51.745099  672811 node_ready.go:38] duration metric: took 4.233416ms waiting for node "kubenet-20210507224052-391940" to be "Ready" ...
	I0507 22:41:51.745110  672811 pod_ready.go:35] extra waiting up to 5m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 22:41:51.756875  672811 pod_ready.go:78] waiting up to 5m0s for pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace to be "Ready" ...
	I0507 22:41:51.783594  672811 addons.go:131] Setting addon default-storageclass=true in "kubenet-20210507224052-391940"
	W0507 22:41:51.783619  672811 addons.go:140] addon default-storageclass should already be in state true
	I0507 22:41:51.783637  672811 host.go:66] Checking if "kubenet-20210507224052-391940" exists ...
	I0507 22:41:51.784146  672811 cli_runner.go:115] Run: docker container inspect kubenet-20210507224052-391940 --format={{.State.Status}}
	I0507 22:41:51.788959  672811 ssh_runner.go:149] Run: sudo crictl images --output json
	I0507 22:41:51.789007  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:41:51.792078  672811 out.go:170]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0507 22:41:51.792203  672811 addons.go:261] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 22:41:51.792220  672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0507 22:41:51.792278  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:41:51.832922  672811 addons.go:261] installing /etc/kubernetes/addons/storageclass.yaml
	I0507 22:41:51.832950  672811 ssh_runner.go:316] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0507 22:41:51.833006  672811 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" kubenet-20210507224052-391940
	I0507 22:41:51.843625  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:41:51.848427  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:41:51.881629  672811 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33326 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/kubenet-20210507224052-391940/id_rsa Username:docker}
	I0507 22:41:51.945739  672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0507 22:41:51.956581  672811 containerd.go:567] couldn't find preloaded image for "docker.io/minikube-local-cache-test:functional-20210507215728-391940". assuming images are not preloaded.
	I0507 22:41:51.956604  672811 cache_images.go:78] LoadImages start: [minikube-local-cache-test:functional-20210507215728-391940]
	I0507 22:41:51.956650  672811 image.go:320] retrieving image: minikube-local-cache-test:functional-20210507215728-391940
	I0507 22:41:51.956698  672811 image.go:326] checking repository: index.docker.io/library/minikube-local-cache-test
	I0507 22:41:51.972691  672811 ssh_runner.go:149] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	W0507 22:41:52.183545  672811 image.go:333] remote: HEAD https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: unexpected status code 401 Unauthorized (HEAD responses have no body, use GET for details)
	I0507 22:41:52.183604  672811 image.go:334] short name: minikube-local-cache-test:functional-20210507215728-391940
	I0507 22:41:52.184655  672811 image.go:362] daemon lookup for minikube-local-cache-test:functional-20210507215728-391940: Error response from daemon: reference does not exist
	W0507 22:41:52.330654  672811 image.go:372] authn lookup for minikube-local-cache-test:functional-20210507215728-391940 (trying anon): GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0507 22:41:52.347904  672811 out.go:170] * Enabled addons: storage-provisioner, default-storageclass
	I0507 22:41:52.347937  672811 addons.go:330] enableAddons completed in 629.490714ms
	I0507 22:41:52.481399  672811 image.go:376] remote lookup for minikube-local-cache-test:functional-20210507215728-391940: GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]]
	I0507 22:41:52.481439  672811 image.go:98] error retrieve Image minikube-local-cache-test:functional-20210507215728-391940 ref GET https://index.docker.io/v2/library/minikube-local-cache-test/manifests/functional-20210507215728-391940: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:library/minikube-local-cache-test Type:repository]] 
	I0507 22:41:52.481470  672811 cache_images.go:106] "minikube-local-cache-test:functional-20210507215728-391940" needs transfer: got empty img digest "" for minikube-local-cache-test:functional-20210507215728-391940
	I0507 22:41:52.481491  672811 cache_images.go:271] Loading image from: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:41:52.481574  672811 ssh_runner.go:149] Run: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:41:52.485013  672811 ssh_runner.go:306] existence check for /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: stat -c "%s %y" /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot stat '/var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940': No such file or directory
	I0507 22:41:52.485041  672811 ssh_runner.go:316] scp /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 --> /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940 (5120 bytes)
	I0507 22:41:52.502314  672811 containerd.go:267] Loading image: /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:41:52.502378  672811 ssh_runner.go:149] Run: sudo ctr -n=k8s.io images import /var/lib/minikube/images/minikube-local-cache-test_functional-20210507215728-391940
	I0507 22:41:52.612982  672811 cache_images.go:293] Transferred and loaded /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/images/minikube-local-cache-test_functional-20210507215728-391940 from cache
	I0507 22:41:52.613030  672811 cache_images.go:113] Successfully loaded all cached images
	I0507 22:41:52.613038  672811 cache_images.go:82] LoadImages completed in 656.425091ms
	I0507 22:41:52.613050  672811 cache_images.go:252] succeeded pushing to: kubenet-20210507224052-391940
	I0507 22:41:52.613059  672811 cache_images.go:253] failed pushing to: 
	I0507 22:41:53.769362  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:41:55.770429  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:41:58.269524  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:00.269630  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:02.269711  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:04.774753  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:07.270739  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:09.770268  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:12.270101  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:14.769932  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:17.269374  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:19.770111  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:22.269528  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:24.769876  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:27.269737  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:29.772420  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:32.269539  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:34.269995  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:36.769591  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:38.770070  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:40.771262  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:43.269652  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:45.769197  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:48.270916  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:50.769708  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:53.270043  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:55.769030  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:57.769122  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:42:59.769527  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:02.269666  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:04.769578  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:06.769758  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:08.770354  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:10.770498  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:13.271448  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:15.770804  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:18.269214  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:20.269718  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:22.769151  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:24.771659  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:27.269262  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:29.269802  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:31.769488  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:33.769541  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:36.268974  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:38.269261  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:40.270280  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:42.771006  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:45.269345  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:47.768594  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:49.769670  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:52.269433  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:54.769190  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:56.769657  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:43:59.269644  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:01.269772  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:03.769233  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:05.769576  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:08.269493  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:10.769584  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:12.770143  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:15.269008  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:17.269047  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:19.270021  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:21.270385  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:23.768995  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:25.770177  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:28.268810  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:30.269545  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:32.769848  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:35.269721  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:37.768834  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:39.769004  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:41.769947  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:44.269742  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:46.769632  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:49.269169  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:51.270949  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:53.769304  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:55.769541  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:44:58.269162  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:00.269690  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:02.769677  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:04.769826  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:06.774522  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:09.269620  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:11.269885  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:13.770049  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:15.772664  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:18.269043  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:20.769936  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:22.770233  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:25.269440  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:27.288248  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:29.770145  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:32.268998  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:34.269736  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:36.769998  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:39.269073  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:41.269867  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:43.769776  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:46.269226  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:48.769817  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:51.269597  672811 pod_ready.go:102] pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace has status "Ready":"False"
	I0507 22:45:51.773488  672811 pod_ready.go:81] duration metric: took 4m0.016579269s waiting for pod "coredns-74ff55c5b-g7c7z" in "kube-system" namespace to be "Ready" ...
	E0507 22:45:51.773523  672811 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting for the condition
	I0507 22:45:51.773536  672811 pod_ready.go:78] waiting up to 5m0s for pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:51.777341  672811 pod_ready.go:92] pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:45:51.777357  672811 pod_ready.go:81] duration metric: took 3.813085ms waiting for pod "etcd-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:51.777371  672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:51.780967  672811 pod_ready.go:92] pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:45:51.780982  672811 pod_ready.go:81] duration metric: took 3.604125ms waiting for pod "kube-apiserver-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:51.780991  672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:51.784544  672811 pod_ready.go:92] pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:45:51.784564  672811 pod_ready.go:81] duration metric: took 3.566966ms waiting for pod "kube-controller-manager-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:51.784576  672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-proxy-52sqc" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:52.168404  672811 pod_ready.go:92] pod "kube-proxy-52sqc" in "kube-system" namespace has status "Ready":"True"
	I0507 22:45:52.168426  672811 pod_ready.go:81] duration metric: took 383.841925ms waiting for pod "kube-proxy-52sqc" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:52.168441  672811 pod_ready.go:78] waiting up to 5m0s for pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:52.567262  672811 pod_ready.go:92] pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace has status "Ready":"True"
	I0507 22:45:52.567285  672811 pod_ready.go:81] duration metric: took 398.834268ms waiting for pod "kube-scheduler-kubenet-20210507224052-391940" in "kube-system" namespace to be "Ready" ...
	I0507 22:45:52.567296  672811 pod_ready.go:38] duration metric: took 4m0.822169579s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0507 22:45:52.567360  672811 api_server.go:50] waiting for apiserver process to appear ...
	I0507 22:45:52.567436  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0507 22:45:52.567610  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0507 22:45:52.591474  672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
	I0507 22:45:52.591498  672811 cri.go:76] found id: ""
	I0507 22:45:52.591530  672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56]
	I0507 22:45:52.591595  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:52.594489  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0507 22:45:52.594543  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0507 22:45:52.615397  672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
	I0507 22:45:52.615416  672811 cri.go:76] found id: ""
	I0507 22:45:52.615422  672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259]
	I0507 22:45:52.615459  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:52.618174  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0507 22:45:52.618232  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0507 22:45:52.638869  672811 cri.go:76] found id: ""
	I0507 22:45:52.638889  672811 logs.go:270] 0 containers: []
	W0507 22:45:52.638895  672811 logs.go:272] No container was found matching "coredns"
	I0507 22:45:52.638901  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0507 22:45:52.638934  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0507 22:45:52.658993  672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
	I0507 22:45:52.659013  672811 cri.go:76] found id: ""
	I0507 22:45:52.659020  672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac]
	I0507 22:45:52.659065  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:52.661726  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0507 22:45:52.661787  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0507 22:45:52.682559  672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
	I0507 22:45:52.682577  672811 cri.go:76] found id: ""
	I0507 22:45:52.682582  672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c]
	I0507 22:45:52.682614  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:52.685304  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0507 22:45:52.685349  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0507 22:45:52.705409  672811 cri.go:76] found id: ""
	I0507 22:45:52.705430  672811 logs.go:270] 0 containers: []
	W0507 22:45:52.705437  672811 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0507 22:45:52.705444  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0507 22:45:52.705490  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0507 22:45:52.725521  672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
	I0507 22:45:52.725549  672811 cri.go:76] found id: ""
	I0507 22:45:52.725557  672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4]
	I0507 22:45:52.725594  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:52.728131  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0507 22:45:52.728184  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0507 22:45:52.748039  672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
	I0507 22:45:52.748057  672811 cri.go:76] found id: ""
	I0507 22:45:52.748062  672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9]
	I0507 22:45:52.748097  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:52.750638  672811 logs.go:123] Gathering logs for containerd ...
	I0507 22:45:52.750654  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0507 22:45:52.786622  672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ...
	I0507 22:45:52.786647  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
	I0507 22:45:52.824066  672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ...
	I0507 22:45:52.824090  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
	I0507 22:45:52.848548  672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ...
	I0507 22:45:52.848571  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
	I0507 22:45:52.869909  672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ...
	I0507 22:45:52.869930  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
	I0507 22:45:52.894365  672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ...
	I0507 22:45:52.894389  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
	I0507 22:45:52.915404  672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ...
	I0507 22:45:52.915425  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
	I0507 22:45:52.952429  672811 logs.go:123] Gathering logs for container status ...
	I0507 22:45:52.952458  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 22:45:52.976316  672811 logs.go:123] Gathering logs for kubelet ...
	I0507 22:45:52.976343  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 22:45:53.036653  672811 logs.go:123] Gathering logs for dmesg ...
	I0507 22:45:53.036690  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 22:45:53.057944  672811 logs.go:123] Gathering logs for describe nodes ...
	I0507 22:45:53.057967  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 22:45:55.641264  672811 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 22:45:55.660478  672811 api_server.go:70] duration metric: took 4m3.942074808s to wait for apiserver process to appear ...
	I0507 22:45:55.660507  672811 api_server.go:86] waiting for apiserver healthz status ...
	I0507 22:45:55.660536  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0507 22:45:55.660583  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0507 22:45:55.681645  672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
	I0507 22:45:55.681673  672811 cri.go:76] found id: ""
	I0507 22:45:55.681680  672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56]
	I0507 22:45:55.681720  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:55.684913  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0507 22:45:55.684970  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0507 22:45:55.705493  672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
	I0507 22:45:55.705512  672811 cri.go:76] found id: ""
	I0507 22:45:55.705520  672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259]
	I0507 22:45:55.705566  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:55.708189  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0507 22:45:55.708243  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0507 22:45:55.728489  672811 cri.go:76] found id: ""
	I0507 22:45:55.728507  672811 logs.go:270] 0 containers: []
	W0507 22:45:55.728513  672811 logs.go:272] No container was found matching "coredns"
	I0507 22:45:55.728520  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0507 22:45:55.728577  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0507 22:45:55.748870  672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
	I0507 22:45:55.748891  672811 cri.go:76] found id: ""
	I0507 22:45:55.748897  672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac]
	I0507 22:45:55.748931  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:55.751528  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0507 22:45:55.751588  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0507 22:45:55.771423  672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
	I0507 22:45:55.771447  672811 cri.go:76] found id: ""
	I0507 22:45:55.771454  672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c]
	I0507 22:45:55.771493  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:55.774059  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0507 22:45:55.774100  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0507 22:45:55.793936  672811 cri.go:76] found id: ""
	I0507 22:45:55.793955  672811 logs.go:270] 0 containers: []
	W0507 22:45:55.793962  672811 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0507 22:45:55.793968  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0507 22:45:55.794010  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0507 22:45:55.814066  672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
	I0507 22:45:55.814087  672811 cri.go:76] found id: ""
	I0507 22:45:55.814094  672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4]
	I0507 22:45:55.814132  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:55.816677  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0507 22:45:55.816729  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0507 22:45:55.836707  672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
	I0507 22:45:55.836735  672811 cri.go:76] found id: ""
	I0507 22:45:55.836743  672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9]
	I0507 22:45:55.836785  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:55.839333  672811 logs.go:123] Gathering logs for describe nodes ...
	I0507 22:45:55.839356  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 22:45:55.924686  672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ...
	I0507 22:45:55.924720  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
	I0507 22:45:55.962145  672811 logs.go:123] Gathering logs for containerd ...
	I0507 22:45:55.962173  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0507 22:45:56.001877  672811 logs.go:123] Gathering logs for kubelet ...
	I0507 22:45:56.001906  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 22:45:56.066223  672811 logs.go:123] Gathering logs for dmesg ...
	I0507 22:45:56.066252  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 22:45:56.087631  672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ...
	I0507 22:45:56.087656  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
	I0507 22:45:56.113575  672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ...
	I0507 22:45:56.113600  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
	I0507 22:45:56.139177  672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ...
	I0507 22:45:56.139205  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
	I0507 22:45:56.160446  672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ...
	I0507 22:45:56.160467  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
	I0507 22:45:56.181281  672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ...
	I0507 22:45:56.181304  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
	I0507 22:45:56.215392  672811 logs.go:123] Gathering logs for container status ...
	I0507 22:45:56.215415  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 22:45:58.739121  672811 api_server.go:223] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0507 22:45:58.747973  672811 api_server.go:249] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0507 22:45:58.748915  672811 api_server.go:139] control plane version: v1.20.2
	I0507 22:45:58.748937  672811 api_server.go:129] duration metric: took 3.088423463s to wait for apiserver health ...
	I0507 22:45:58.748946  672811 system_pods.go:43] waiting for kube-system pods to appear ...
	I0507 22:45:58.748968  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-apiserver Namespaces:[]}
	I0507 22:45:58.749014  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0507 22:45:58.772016  672811 cri.go:76] found id: "b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
	I0507 22:45:58.772034  672811 cri.go:76] found id: ""
	I0507 22:45:58.772041  672811 logs.go:270] 1 containers: [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56]
	I0507 22:45:58.772081  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:58.774963  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:etcd Namespaces:[]}
	I0507 22:45:58.775021  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=etcd
	I0507 22:45:58.796011  672811 cri.go:76] found id: "9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
	I0507 22:45:58.796030  672811 cri.go:76] found id: ""
	I0507 22:45:58.796038  672811 logs.go:270] 1 containers: [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259]
	I0507 22:45:58.796077  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:58.798611  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:coredns Namespaces:[]}
	I0507 22:45:58.798654  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=coredns
	I0507 22:45:58.819119  672811 cri.go:76] found id: ""
	I0507 22:45:58.819141  672811 logs.go:270] 0 containers: []
	W0507 22:45:58.819148  672811 logs.go:272] No container was found matching "coredns"
	I0507 22:45:58.819155  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-scheduler Namespaces:[]}
	I0507 22:45:58.819199  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0507 22:45:58.838941  672811 cri.go:76] found id: "148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
	I0507 22:45:58.838959  672811 cri.go:76] found id: ""
	I0507 22:45:58.838964  672811 logs.go:270] 1 containers: [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac]
	I0507 22:45:58.839011  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:58.841577  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-proxy Namespaces:[]}
	I0507 22:45:58.841630  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0507 22:45:58.862008  672811 cri.go:76] found id: "75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
	I0507 22:45:58.862038  672811 cri.go:76] found id: ""
	I0507 22:45:58.862046  672811 logs.go:270] 1 containers: [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c]
	I0507 22:45:58.862086  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:58.864678  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0507 22:45:58.864729  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0507 22:45:58.884659  672811 cri.go:76] found id: ""
	I0507 22:45:58.884673  672811 logs.go:270] 0 containers: []
	W0507 22:45:58.884678  672811 logs.go:272] No container was found matching "kubernetes-dashboard"
	I0507 22:45:58.884685  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:storage-provisioner Namespaces:[]}
	I0507 22:45:58.884728  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0507 22:45:58.904618  672811 cri.go:76] found id: "b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
	I0507 22:45:58.904641  672811 cri.go:76] found id: ""
	I0507 22:45:58.904648  672811 logs.go:270] 1 containers: [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4]
	I0507 22:45:58.904679  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:58.907292  672811 cri.go:41] listing CRI containers in root /run/containerd/runc/k8s.io: {State:all Name:kube-controller-manager Namespaces:[]}
	I0507 22:45:58.907336  672811 ssh_runner.go:149] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0507 22:45:58.927242  672811 cri.go:76] found id: "fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
	I0507 22:45:58.927257  672811 cri.go:76] found id: ""
	I0507 22:45:58.927262  672811 logs.go:270] 1 containers: [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9]
	I0507 22:45:58.927292  672811 ssh_runner.go:149] Run: which crictl
	I0507 22:45:58.929833  672811 logs.go:123] Gathering logs for kube-scheduler [148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac] ...
	I0507 22:45:58.929851  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 148633a394b3770297d1cd823e35542991010eb308c6c715f3bf041dd31827ac"
	I0507 22:45:58.952999  672811 logs.go:123] Gathering logs for kube-proxy [75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c] ...
	I0507 22:45:58.953020  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 75176ef021882780077a5a6edfaab55d6c94cab9052bbeee9af1916070f1830c"
	I0507 22:45:58.974611  672811 logs.go:123] Gathering logs for container status ...
	I0507 22:45:58.974637  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0507 22:45:58.997315  672811 logs.go:123] Gathering logs for kubelet ...
	I0507 22:45:58.997340  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0507 22:45:59.057902  672811 logs.go:123] Gathering logs for dmesg ...
	I0507 22:45:59.057927  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0507 22:45:59.079247  672811 logs.go:123] Gathering logs for describe nodes ...
	I0507 22:45:59.079269  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0507 22:45:59.161719  672811 logs.go:123] Gathering logs for kube-apiserver [b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56] ...
	I0507 22:45:59.161753  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b0c96757d2c1a4f1be8252084cc3b03a4be74eb1290ec0d9c23fc2f95de13f56"
	I0507 22:45:59.199262  672811 logs.go:123] Gathering logs for etcd [9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259] ...
	I0507 22:45:59.199288  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9c34570c0c0500d89639a655d5ff6ba46b968a6f226c408a5eca5815327bd259"
	I0507 22:45:59.224956  672811 logs.go:123] Gathering logs for storage-provisioner [b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4] ...
	I0507 22:45:59.224982  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 b31ea1c27ce69de48869d05c6e4e1bd5bc912a66824c7fe25f92a42fbdb2b3e4"
	I0507 22:45:59.246819  672811 logs.go:123] Gathering logs for kube-controller-manager [fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9] ...
	I0507 22:45:59.246842  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fd80079cc01a822bfa536c96b31751c113068492450fcc2601b751d98d0ffeb9"
	I0507 22:45:59.283861  672811 logs.go:123] Gathering logs for containerd ...
	I0507 22:45:59.283890  672811 ssh_runner.go:149] Run: /bin/bash -c "sudo journalctl -u containerd -n 400"
	I0507 22:46:01.824624  672811 system_pods.go:59] 7 kube-system pods found
	I0507 22:46:01.824672  672811 system_pods.go:61] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:01.824678  672811 system_pods.go:61] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:01.824684  672811 system_pods.go:61] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:01.824689  672811 system_pods.go:61] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:01.824695  672811 system_pods.go:61] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:01.824699  672811 system_pods.go:61] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:01.824704  672811 system_pods.go:61] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:01.824709  672811 system_pods.go:74] duration metric: took 3.075758667s to wait for pod list to return data ...
	I0507 22:46:01.824722  672811 default_sa.go:34] waiting for default service account to be created ...
	I0507 22:46:01.826962  672811 default_sa.go:45] found service account: "default"
	I0507 22:46:01.826987  672811 default_sa.go:55] duration metric: took 2.259407ms for default service account to be created ...
	I0507 22:46:01.826995  672811 system_pods.go:116] waiting for k8s-apps to be running ...
	I0507 22:46:01.830985  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:01.831020  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:01.831030  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:01.831039  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:01.831047  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:01.831074  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:01.831081  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:01.831086  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:01.831099  672811 retry.go:31] will retry after 305.063636ms: missing components: kube-dns
	I0507 22:46:02.140549  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:02.140579  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:02.140585  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:02.140593  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:02.140600  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:02.140608  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:02.140614  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:02.140621  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:02.140634  672811 retry.go:31] will retry after 338.212508ms: missing components: kube-dns
	I0507 22:46:02.483304  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:02.483338  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:02.483345  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:02.483351  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:02.483355  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:02.483359  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:02.483364  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:02.483367  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:02.483378  672811 retry.go:31] will retry after 378.459802ms: missing components: kube-dns
	I0507 22:46:02.867187  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:02.867218  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:02.867226  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:02.867234  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:02.867241  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:02.867250  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:02.867258  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:02.867264  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:02.867277  672811 retry.go:31] will retry after 469.882201ms: missing components: kube-dns
	I0507 22:46:03.341758  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:03.341789  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:03.341795  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:03.341801  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:03.341806  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:03.341810  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:03.341814  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:03.341817  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:03.341828  672811 retry.go:31] will retry after 667.365439ms: missing components: kube-dns
	I0507 22:46:04.013373  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:04.013405  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:04.013411  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:04.013417  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:04.013422  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:04.013425  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:04.013430  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:04.013433  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:04.013443  672811 retry.go:31] will retry after 597.243124ms: missing components: kube-dns
	I0507 22:46:04.615326  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:04.615358  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:04.615366  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:04.615375  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:04.615386  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:04.615398  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:04.615403  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:04.615410  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:04.615422  672811 retry.go:31] will retry after 789.889932ms: missing components: kube-dns
	I0507 22:46:05.411070  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:05.411103  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:05.411109  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:05.411115  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:05.411120  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:05.411124  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:05.411128  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:05.411134  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:05.411145  672811 retry.go:31] will retry after 951.868007ms: missing components: kube-dns
	I0507 22:46:06.367954  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:06.367985  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:06.367994  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:06.368003  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:06.368008  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:06.368012  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:06.368016  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:06.368022  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:06.368033  672811 retry.go:31] will retry after 1.341783893s: missing components: kube-dns
	I0507 22:46:07.715243  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:07.715278  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:07.715284  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:07.715290  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:07.715294  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:07.715299  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:07.715303  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:07.715307  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:07.715318  672811 retry.go:31] will retry after 1.876813009s: missing components: kube-dns
	I0507 22:46:09.596846  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:09.596877  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:09.596883  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:09.596889  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:09.596894  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:09.596898  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:09.596902  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:09.596908  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:09.596919  672811 retry.go:31] will retry after 2.6934314s: missing components: kube-dns
	I0507 22:46:12.295432  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:12.295467  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:12.295473  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:12.295479  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:12.295484  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:12.295488  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:12.295492  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:12.295496  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:12.295535  672811 retry.go:31] will retry after 2.494582248s: missing components: kube-dns
	I0507 22:46:14.802279  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:14.802312  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:14.802319  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:14.802328  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:14.802332  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:14.802338  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:14.802347  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:14.802351  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:14.802365  672811 retry.go:31] will retry after 3.420895489s: missing components: kube-dns
	I0507 22:46:18.228571  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:18.228606  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:18.228614  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:18.228620  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:18.228625  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:18.228629  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:18.228634  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:18.228641  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:18.228690  672811 retry.go:31] will retry after 4.133785681s: missing components: kube-dns
	I0507 22:46:22.368039  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:22.368077  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:22.368083  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:22.368090  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:22.368094  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:22.368099  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:22.368104  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:22.368110  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:22.368123  672811 retry.go:31] will retry after 5.595921491s: missing components: kube-dns
	I0507 22:46:27.968419  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:27.968457  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:27.968468  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:27.968478  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:27.968485  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:27.968491  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:27.968500  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:27.968506  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:27.968522  672811 retry.go:31] will retry after 6.3346098s: missing components: kube-dns
	I0507 22:46:34.308467  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:34.308500  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:34.308506  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:34.308513  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:34.308517  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:34.308521  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:34.308525  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:34.308529  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:34.308550  672811 retry.go:31] will retry after 7.962971847s: missing components: kube-dns
	I0507 22:46:42.276615  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:42.276650  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:42.276658  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:42.276674  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:42.276682  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:42.276692  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:42.276702  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:42.276711  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:42.276728  672811 retry.go:31] will retry after 12.096349863s: missing components: kube-dns
	I0507 22:46:54.377899  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:46:54.377933  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:46:54.377939  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:46:54.377945  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:46:54.377950  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:46:54.377954  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:46:54.377959  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:46:54.377962  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:46:54.377976  672811 retry.go:31] will retry after 11.924857264s: missing components: kube-dns
	I0507 22:47:06.308089  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:47:06.308137  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:47:06.308147  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:47:06.308156  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:47:06.308169  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:47:06.308181  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:47:06.308189  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:47:06.308195  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:47:06.308215  672811 retry.go:31] will retry after 14.772791249s: missing components: kube-dns
	I0507 22:47:21.085968  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:47:21.086010  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:47:21.086021  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:47:21.086030  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:47:21.086040  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:47:21.086054  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:47:21.086061  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:47:21.086068  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:47:21.086093  672811 retry.go:31] will retry after 20.175608267s: missing components: kube-dns
	I0507 22:47:41.266530  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:47:41.266567  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:47:41.266575  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:47:41.266583  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:47:41.266587  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:47:41.266592  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:47:41.266596  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:47:41.266600  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:47:41.266611  672811 retry.go:31] will retry after 28.062855718s: missing components: kube-dns
	I0507 22:48:09.334307  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:48:09.334345  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:48:09.334354  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:48:09.334362  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:48:09.334369  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:48:09.334378  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:48:09.334385  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:48:09.334392  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:48:09.334407  672811 retry.go:31] will retry after 40.022161579s: missing components: kube-dns
	I0507 22:48:49.361787  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:48:49.361828  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:48:49.361835  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:48:49.361841  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:48:49.361846  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:48:49.361849  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:48:49.361856  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:48:49.361860  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:48:49.361874  672811 retry.go:31] will retry after 37.970670965s: missing components: kube-dns
	I0507 22:49:27.337225  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:49:27.337262  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:49:27.337269  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:49:27.337276  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:49:27.337280  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:49:27.337284  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:49:27.337289  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:49:27.337292  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:49:27.337304  672811 retry.go:31] will retry after 47.568379235s: missing components: kube-dns
	I0507 22:50:14.911358  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:50:14.911396  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:50:14.911404  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:50:14.911411  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:50:14.911415  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:50:14.911419  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:50:14.911423  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:50:14.911428  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:50:14.911439  672811 retry.go:31] will retry after 1m7.577191067s: missing components: kube-dns
	I0507 22:51:22.494081  672811 system_pods.go:86] 7 kube-system pods found
	I0507 22:51:22.494122  672811 system_pods.go:89] "coredns-74ff55c5b-g7c7z" [600662d5-0810-4cc4-9c1c-948a98a998f7] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0507 22:51:22.494130  672811 system_pods.go:89] "etcd-kubenet-20210507224052-391940" [eea168ee-4c8e-43a6-8108-52967320ef6a] Running
	I0507 22:51:22.494136  672811 system_pods.go:89] "kube-apiserver-kubenet-20210507224052-391940" [f39cdc15-ebd6-4a97-99e4-756feccc052a] Running
	I0507 22:51:22.494141  672811 system_pods.go:89] "kube-controller-manager-kubenet-20210507224052-391940" [2ae19e1a-5f7a-4618-9cd2-d47d424f72f5] Running
	I0507 22:51:22.494144  672811 system_pods.go:89] "kube-proxy-52sqc" [643a0bba-fae2-4c11-a3f8-3b60b0749613] Running
	I0507 22:51:22.494148  672811 system_pods.go:89] "kube-scheduler-kubenet-20210507224052-391940" [09fd6984-6adc-461a-833d-fb7835fa1e8c] Running
	I0507 22:51:22.494153  672811 system_pods.go:89] "storage-provisioner" [ceded4eb-c33a-4e8f-a94f-676277e21e9e] Running
	I0507 22:51:22.496731  672811 out.go:170] 
	W0507 22:51:22.496964  672811 out.go:235] X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	X Exiting due to GUEST_START: wait 5m0s for node: waiting for apps_running: expected k8s-apps: missing components: kube-dns
	W0507 22:51:22.496978  672811 out.go:424] no arguments passed for "* \n" - returning raw string
	W0507 22:51:22.496984  672811 out.go:235] * 
	* 
	W0507 22:51:22.496995  672811 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n" - returning raw string
	W0507 22:51:22.497001  672811 out.go:424] no arguments passed for "  https://github.com/kubernetes/minikube/issues/new/choose\n\n" - returning raw string
	W0507 22:51:22.497005  672811 out.go:424] no arguments passed for "* Please attach the following file to the GitHub issue:\n" - returning raw string
	W0507 22:51:22.497050  672811 out.go:424] no arguments passed for "* If the above advice does not help, please let us know:\n  https://github.com/kubernetes/minikube/issues/new/choose\n\n* Please attach the following file to the GitHub issue:\n* - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt\n\n" - returning raw string
	W0507 22:51:22.498864  672811 out.go:235] ╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	W0507 22:51:22.498879  672811 out.go:235] │                                                                                                                                                                │
	│                                                                                                                                                                │
	W0507 22:51:22.498885  672811 out.go:235] │    * If the above advice does not help, please let us know:                                                                                                    │
	│    * If the above advice does not help, please let us know:                                                                                                    │
	W0507 22:51:22.498891  672811 out.go:235] │      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                                                                  │
	W0507 22:51:22.498898  672811 out.go:235] │                                                                                                                                                                │
	│                                                                                                                                                                │
	W0507 22:51:22.498906  672811 out.go:235] │    * Please attach the following file to the GitHub issue:                                                                                                     │
	│    * Please attach the following file to the GitHub issue:                                                                                                     │
	W0507 22:51:22.498917  672811 out.go:235] │    * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt    │
	│    * - /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/logs/lastStart.txt    │
	W0507 22:51:22.498930  672811 out.go:235] │                                                                                                                                                                │
	│                                                                                                                                                                │
	W0507 22:51:22.498941  672811 out.go:235] ╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
	W0507 22:51:22.498954  672811 out.go:235] 
	
	I0507 22:51:22.500255  672811 out.go:170] 
** /stderr **
net_test.go:85: failed start: exit status 80
--- FAIL: TestNetworkPlugins/group/kubenet/Start (629.71s)
Test pass (222/247)

Order passed test Duration
3 TestDownloadOnly/v1.14.0/json-events 8.96
4 TestDownloadOnly/v1.14.0/preload-exists 0
6 TestDownloadOnly/v1.14.0/binaries 0
8 TestDownloadOnly/v1.14.0/LogsDuration 0.08
10 TestDownloadOnly/v1.20.2/json-events 9.17
11 TestDownloadOnly/v1.20.2/preload-exists 0
13 TestDownloadOnly/v1.20.2/binaries 0
15 TestDownloadOnly/v1.20.2/LogsDuration 0.07
17 TestDownloadOnly/v1.22.0-alpha.1/json-events 17.28
18 TestDownloadOnly/v1.22.0-alpha.1/preload-exists 0
20 TestDownloadOnly/v1.22.0-alpha.1/binaries 0
22 TestDownloadOnly/v1.22.0-alpha.1/LogsDuration 0.07
23 TestDownloadOnly/DeleteAll 1.66
24 TestDownloadOnly/DeleteAlwaysSucceeds 0.3
25 TestDownloadOnlyKic 4.09
26 TestOffline 169.58
29 TestAddons/parallel/Registry 17.29
30 TestAddons/parallel/Ingress 40.37
31 TestAddons/parallel/MetricsServer 5.7
32 TestAddons/parallel/HelmTiller 9.39
34 TestAddons/parallel/CSI 64.09
35 TestAddons/parallel/GCPAuth 48.9
36 TestCertOptions 48.61
38 TestForceSystemdFlag 70.58
39 TestForceSystemdEnv 48.31
44 TestErrorSpam/start 54.18
45 TestErrorSpam/status 30.62
46 TestErrorSpam/pause 2.18
47 TestErrorSpam/unpause 0.53
48 TestErrorSpam/stop 20.71
51 TestFunctional/serial/CopySyncFile 0
52 TestFunctional/serial/StartWithProxy 134.51
53 TestFunctional/serial/AuditLog 0
54 TestFunctional/serial/SoftStart 15.3
55 TestFunctional/serial/KubeContext 0.04
56 TestFunctional/serial/KubectlGetPods 0.2
59 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
60 TestFunctional/serial/CacheCmd/cache/add_local 1.1
61 TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 0.06
62 TestFunctional/serial/CacheCmd/cache/list 0.06
63 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.28
64 TestFunctional/serial/CacheCmd/cache/cache_reload 1.99
65 TestFunctional/serial/CacheCmd/cache/delete 0.12
66 TestFunctional/serial/MinikubeKubectlCmd 0.12
67 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
68 TestFunctional/serial/ExtraConfig 114.08
69 TestFunctional/serial/ComponentHealth 0.07
71 TestFunctional/parallel/ConfigCmd 0.45
72 TestFunctional/parallel/DashboardCmd 5.28
73 TestFunctional/parallel/DryRun 0.61
74 TestFunctional/parallel/StatusCmd 1.01
75 TestFunctional/parallel/LogsCmd 2.02
76 TestFunctional/parallel/LogsFileCmd 2.02
77 TestFunctional/parallel/MountCmd 6.88
79 TestFunctional/parallel/ServiceCmd 12.62
80 TestFunctional/parallel/AddonsCmd 0.19
81 TestFunctional/parallel/PersistentVolumeClaim 35.71
83 TestFunctional/parallel/SSHCmd 0.56
84 TestFunctional/parallel/CpCmd 0.55
85 TestFunctional/parallel/MySQL 18.94
86 TestFunctional/parallel/FileSync 0.29
87 TestFunctional/parallel/CertSync 0.94
91 TestFunctional/parallel/NodeLabels 0.08
92 TestFunctional/parallel/LoadImage 2.25
93 TestFunctional/parallel/RemoveImage 2.2
94 TestFunctional/parallel/BuildImage 2.77
95 TestFunctional/parallel/ListImages 0.26
96 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
97 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
98 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.11
99 TestFunctional/parallel/ProfileCmd/profile_not_create 0.58
100 TestFunctional/parallel/ProfileCmd/profile_list 0.57
101 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
103 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
105 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
106 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
110 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
111 TestFunctional/delete_busybox_image 0.08
112 TestFunctional/delete_my-image_image 0.04
113 TestFunctional/delete_minikube_cached_images 0.04
117 TestJSONOutput/start/Audit 0
119 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
120 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
122 TestJSONOutput/pause/Audit 0
124 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
125 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
127 TestJSONOutput/unpause/Audit 0
129 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
130 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
132 TestJSONOutput/stop/Audit 0
134 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
135 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
136 TestErrorJSONOutput 0.41
138 TestKicCustomNetwork/create_custom_network 28.54
139 TestKicCustomNetwork/use_default_bridge_network 25.23
140 TestKicExistingNetwork 24.66
141 TestMainNoArgs 0.06
144 TestMultiNode/serial/FreshStart2Nodes 177.4
145 TestMultiNode/serial/DeployApp2Nodes 4.35
146 TestMultiNode/serial/AddNode 43.59
147 TestMultiNode/serial/ProfileList 0.3
148 TestMultiNode/serial/StopNode 2.45
149 TestMultiNode/serial/StartAfterStop 35.46
150 TestMultiNode/serial/DeleteNode 5.48
151 TestMultiNode/serial/StopMultiNode 41.55
152 TestMultiNode/serial/RestartMultiNode 149.83
153 TestMultiNode/serial/ValidateNameConflict 49.38
159 TestDebPackageInstall/install_amd64_debian:sid/minikube 0
160 TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver 10.69
162 TestDebPackageInstall/install_amd64_debian:latest/minikube 0
163 TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver 9.79
165 TestDebPackageInstall/install_amd64_debian:10/minikube 0
166 TestDebPackageInstall/install_amd64_debian:10/kvm2-driver 9.72
168 TestDebPackageInstall/install_amd64_debian:9/minikube 0
169 TestDebPackageInstall/install_amd64_debian:9/kvm2-driver 8.11
171 TestDebPackageInstall/install_amd64_ubuntu:latest/minikube 0
172 TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver 16.03
174 TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube 0
175 TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver 13.33
177 TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube 0
178 TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver 14.52
180 TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube 0
181 TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver 12.61
182 TestPreload 133.16
184 TestScheduledStopUnix 71.25
187 TestInsufficientStorage 8.88
188 TestRunningBinaryUpgrade 134.17
190 TestKubernetesUpgrade 183.61
191 TestMissingContainerUpgrade 359.29
195 TestPause/serial/Start 167.34
212 TestPause/serial/SecondStartNoReconfiguration 17.15
213 TestPause/serial/Pause 0.67
214 TestPause/serial/Unpause 0.79
215 TestPause/serial/PauseAgain 7.51
217 TestStoppedBinaryUpgrade/MinikubeLogs 0.73
219 TestStartStop/group/old-k8s-version/serial/FirstStart 132.55
221 TestStartStop/group/no-preload/serial/FirstStart 74.58
222 TestStartStop/group/no-preload/serial/DeployApp 9.49
223 TestStartStop/group/no-preload/serial/Stop 20.66
224 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.2
225 TestStartStop/group/no-preload/serial/SecondStart 70.07
226 TestStartStop/group/old-k8s-version/serial/DeployApp 9.59
227 TestStartStop/group/old-k8s-version/serial/Stop 20.88
228 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.2
229 TestStartStop/group/old-k8s-version/serial/SecondStart 117.98
230 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 5.02
231 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.01
232 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.29
233 TestStartStop/group/no-preload/serial/Pause 2.48
235 TestStartStop/group/embed-certs/serial/FirstStart 133.81
237 TestStartStop/group/default-k8s-different-port/serial/FirstStart 146.99
238 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 5.02
239 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.03
240 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 1.35
241 TestStartStop/group/old-k8s-version/serial/Pause 4.6
243 TestStartStop/group/newest-cni/serial/FirstStart 65.87
244 TestStartStop/group/embed-certs/serial/DeployApp 8.39
245 TestStartStop/group/embed-certs/serial/Stop 20.94
246 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.23
247 TestStartStop/group/embed-certs/serial/SecondStart 110.47
248 TestStartStop/group/newest-cni/serial/DeployApp 0
249 TestStartStop/group/newest-cni/serial/Stop 1.36
250 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
251 TestStartStop/group/newest-cni/serial/SecondStart 67.8
252 TestStartStop/group/default-k8s-different-port/serial/DeployApp 9.54
253 TestStartStop/group/default-k8s-different-port/serial/Stop 25.25
254 TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop 0.22
255 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
256 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
257 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
258 TestStartStop/group/default-k8s-different-port/serial/SecondStart 114.27
259 TestStartStop/group/newest-cni/serial/Pause 2.25
260 TestNetworkPlugins/group/auto/Start 147.97
261 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 5.72
262 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 6.31
263 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.29
264 TestStartStop/group/embed-certs/serial/Pause 2.55
266 TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop 5.01
267 TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop 5.01
268 TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages 0.3
269 TestStartStop/group/default-k8s-different-port/serial/Pause 2.56
270 TestNetworkPlugins/group/cilium/Start 140.72
271 TestNetworkPlugins/group/auto/KubeletFlags 0.29
272 TestNetworkPlugins/group/auto/NetCatPod 10.24
273 TestNetworkPlugins/group/auto/DNS 160.51
274 TestNetworkPlugins/group/cilium/ControllerPod 5.02
275 TestNetworkPlugins/group/cilium/KubeletFlags 0.29
276 TestNetworkPlugins/group/cilium/NetCatPod 8.42
277 TestNetworkPlugins/group/cilium/DNS 0.15
278 TestNetworkPlugins/group/cilium/Localhost 0.16
279 TestNetworkPlugins/group/cilium/HairPin 0.14
280 TestNetworkPlugins/group/calico/Start 145.5
281 TestPause/serial/VerifyStatus 0.5
282 TestNetworkPlugins/group/custom-weave/Start 152.84
283 TestNetworkPlugins/group/auto/Localhost 0.2
285 TestNetworkPlugins/group/enable-default-cni/Start 136.16
286 TestNetworkPlugins/group/calico/ControllerPod 5.02
287 TestNetworkPlugins/group/calico/KubeletFlags 0.3
288 TestNetworkPlugins/group/calico/NetCatPod 9.27
289 TestNetworkPlugins/group/custom-weave/KubeletFlags 0.29
290 TestNetworkPlugins/group/custom-weave/NetCatPod 8.43
291 TestNetworkPlugins/group/calico/DNS 0.18
292 TestNetworkPlugins/group/calico/Localhost 0.16
293 TestNetworkPlugins/group/calico/HairPin 0.15
294 TestNetworkPlugins/group/kindnet/Start 122.6
295 TestNetworkPlugins/group/bridge/Start 163.09
296 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.3
297 TestNetworkPlugins/group/enable-default-cni/NetCatPod 18.27
298 TestNetworkPlugins/group/enable-default-cni/DNS 0.17
299 TestNetworkPlugins/group/enable-default-cni/Localhost 0.15
300 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
302 TestNetworkPlugins/group/kindnet/ControllerPod 5.02
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.29
304 TestNetworkPlugins/group/kindnet/NetCatPod 9.26
305 TestNetworkPlugins/group/kindnet/DNS 0.15
306 TestNetworkPlugins/group/kindnet/Localhost 0.14
307 TestNetworkPlugins/group/kindnet/HairPin 0.15
308 TestNetworkPlugins/group/bridge/KubeletFlags 0.29
309 TestNetworkPlugins/group/bridge/NetCatPod 8.25
310 TestNetworkPlugins/group/bridge/DNS 0.16
311 TestNetworkPlugins/group/bridge/Localhost 0.13
312 TestNetworkPlugins/group/bridge/HairPin 0.16

TestDownloadOnly/v1.14.0/json-events (8.96s)

=== RUN   TestDownloadOnly/v1.14.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.14.0 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (8.962092871s)
--- PASS: TestDownloadOnly/v1.14.0/json-events (8.96s)

TestDownloadOnly/v1.14.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.14.0/preload-exists
--- PASS: TestDownloadOnly/v1.14.0/preload-exists (0.00s)

TestDownloadOnly/v1.14.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.14.0/binaries
--- PASS: TestDownloadOnly/v1.14.0/binaries (0.00s)

TestDownloadOnly/v1.14.0/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.14.0/LogsDuration
aaa_download_only_test.go:166: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210507214926-391940
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940: exit status 85 (76.081594ms)
-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/07 21:49:26
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 21:49:26.240597  391953 out.go:291] Setting OutFile to fd 1 ...
	I0507 21:49:26.240722  391953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 21:49:26.240730  391953 out.go:304] Setting ErrFile to fd 2...
	I0507 21:49:26.240733  391953 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 21:49:26.240821  391953 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	W0507 21:49:26.240919  391953 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: no such file or directory
	I0507 21:49:26.241144  391953 out.go:298] Setting JSON to true
	I0507 21:49:26.275812  391953 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":8934,"bootTime":1620415232,"procs":148,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0507 21:49:26.275917  391953 start.go:118] virtualization: kvm guest
	I0507 21:49:26.279810  391953 notify.go:169] Checking for updates...
	W0507 21:49:26.280053  391953 out.go:424] no arguments passed for "minikube skips various validations when --force is supplied; this may lead to unexpected behavior\n" - returning raw string
	I0507 21:49:26.281807  391953 driver.go:322] Setting default libvirt URI to qemu:///system
	I0507 21:49:26.326522  391953 docker.go:119] docker version: linux-19.03.15
	I0507 21:49:26.326618  391953 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 21:49:26.402784  391953 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:26.359237229 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 21:49:26.402858  391953 docker.go:225] overlay module found
	I0507 21:49:26.405360  391953 start.go:276] selected driver: docker
	I0507 21:49:26.405375  391953 start.go:718] validating driver "docker" against <nil>
	I0507 21:49:26.405820  391953 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 21:49:26.482619  391953 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:26.439257002 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 21:49:26.482738  391953 start_flags.go:259] no existing cluster config was found, will generate one from the flags 
	I0507 21:49:26.483226  391953 start_flags.go:314] Using suggested 8000MB memory alloc based on sys=32179MB, container=32179MB
	I0507 21:49:26.483367  391953 start_flags.go:715] Wait components to verify : map[apiserver:true system_pods:true]
	I0507 21:49:26.483429  391953 cni.go:93] Creating CNI manager for ""
	I0507 21:49:26.483497  391953 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0507 21:49:26.483556  391953 cni.go:217] auto-setting extra-config to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0507 21:49:26.483568  391953 cni.go:222] extra-config set to "kubelet.cni-conf-dir=/etc/cni/net.mk"
	I0507 21:49:26.483577  391953 start_flags.go:268] Found "CNI" CNI - setting NetworkPlugin=cni
	I0507 21:49:26.483587  391953 start_flags.go:273] config:
	{Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 21:49:26.486186  391953 cache.go:111] Beginning downloading kic base image for docker with containerd
	W0507 21:49:26.486209  391953 out.go:424] no arguments passed for "Pulling base image ...\n" - returning raw string
	I0507 21:49:26.487972  391953 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0507 21:49:26.488171  391953 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
	I0507 21:49:26.488236  391953 cache.go:134] Downloading gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache
	I0507 21:49:26.488299  391953 image.go:192] Writing gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e to local cache
	I0507 21:49:26.531021  391953 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0507 21:49:26.531072  391953 cache.go:54] Caching tarball of preloaded images
	I0507 21:49:26.531121  391953 preload.go:98] Checking if preload exists for k8s version v1.14.0 and runtime containerd
	I0507 21:49:26.569691  391953 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4
	I0507 21:49:26.572004  391953 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4 ...
	I0507 21:49:26.630245  391953 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4?checksum=md5:f0bc4335eb1ef39b3e6763fea0899135 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.14.0-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210507214926-391940"

-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.14.0/LogsDuration (0.08s)

TestDownloadOnly/v1.20.2/json-events (9.17s)

=== RUN   TestDownloadOnly/v1.20.2/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.20.2 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (9.173878218s)
--- PASS: TestDownloadOnly/v1.20.2/json-events (9.17s)

TestDownloadOnly/v1.20.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.2/preload-exists
--- PASS: TestDownloadOnly/v1.20.2/preload-exists (0.00s)

TestDownloadOnly/v1.20.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.2/binaries
--- PASS: TestDownloadOnly/v1.20.2/binaries (0.00s)

TestDownloadOnly/v1.20.2/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.20.2/LogsDuration
aaa_download_only_test.go:166: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210507214926-391940
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940: exit status 85 (74.016923ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/07 21:49:35
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 21:49:35.278119  392079 out.go:291] Setting OutFile to fd 1 ...
	I0507 21:49:35.278190  392079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 21:49:35.278197  392079 out.go:304] Setting ErrFile to fd 2...
	I0507 21:49:35.278201  392079 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 21:49:35.278287  392079 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	W0507 21:49:35.278399  392079 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: no such file or directory
	I0507 21:49:35.278521  392079 out.go:298] Setting JSON to true
	I0507 21:49:35.312740  392079 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":8943,"bootTime":1620415232,"procs":148,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0507 21:49:35.312810  392079 start.go:118] virtualization: kvm guest
	I0507 21:49:35.316111  392079 notify.go:169] Checking for updates...
	W0507 21:49:35.316343  392079 out.go:424] no arguments passed for "minikube skips various validations when --force is supplied; this may lead to unexpected behavior\n" - returning raw string
	W0507 21:49:35.318252  392079 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0507 21:49:35.318301  392079 driver.go:322] Setting default libvirt URI to qemu:///system
	W0507 21:49:35.318330  392079 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0507 21:49:35.363044  392079 docker.go:119] docker version: linux-19.03.15
	I0507 21:49:35.363135  392079 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 21:49:35.438143  392079 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:35.395127677 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 21:49:35.438233  392079 docker.go:225] overlay module found
	I0507 21:49:35.440463  392079 start.go:276] selected driver: docker
	I0507 21:49:35.440480  392079 start.go:718] validating driver "docker" against &{Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.14.0 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 21:49:35.440948  392079 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 21:49:35.514985  392079 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:35.47337244 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 21:49:35.515489  392079 cni.go:93] Creating CNI manager for ""
	I0507 21:49:35.515538  392079 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0507 21:49:35.515555  392079 start_flags.go:273] config:
	{Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.14.0 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 21:49:35.517631  392079 cache.go:111] Beginning downloading kic base image for docker with containerd
	W0507 21:49:35.517648  392079 out.go:424] no arguments passed for "Pulling base image ...\n" - returning raw string
	I0507 21:49:35.519159  392079 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 21:49:35.519228  392079 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
	I0507 21:49:35.519262  392079 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
	I0507 21:49:35.519275  392079 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
	I0507 21:49:35.563202  392079 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 21:49:35.563219  392079 cache.go:54] Caching tarball of preloaded images
	I0507 21:49:35.563242  392079 preload.go:98] Checking if preload exists for k8s version v1.20.2 and runtime containerd
	I0507 21:49:35.607628  392079 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	I0507 21:49:35.609478  392079 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4 ...
	I0507 21:49:35.666680  392079 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4?checksum=md5:02e256ea4a3f6e9463b63c57de8e1682 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.20.2-containerd-overlay2-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210507214926-391940"

-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.2/LogsDuration (0.07s)

TestDownloadOnly/v1.22.0-alpha.1/json-events (17.28s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-20210507214926-391940 --force --alsologtostderr --kubernetes-version=v1.22.0-alpha.1 --container-runtime=containerd --driver=docker  --container-runtime=containerd: (17.277133475s)
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/json-events (17.28s)

TestDownloadOnly/v1.22.0-alpha.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.1/preload-exists
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/preload-exists (0.00s)

TestDownloadOnly/v1.22.0-alpha.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.1/binaries
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/binaries (0.00s)

TestDownloadOnly/v1.22.0-alpha.1/LogsDuration (0.07s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.1/LogsDuration
aaa_download_only_test.go:166: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-20210507214926-391940
aaa_download_only_test.go:166: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-20210507214926-391940: exit status 85 (73.621095ms)

-- stdout --
	* 
	* ==> Audit <==
	* |---------|------|---------|------|---------|------------|----------|
	| Command | Args | Profile | User | Version | Start Time | End Time |
	|---------|------|---------|------|---------|------------|----------|
	|---------|------|---------|------|---------|------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2021/05/07 21:49:44
	Running on machine: debian-jenkins-agent-11
	Binary: Built with gc go1.16.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0507 21:49:44.526889  392207 out.go:291] Setting OutFile to fd 1 ...
	I0507 21:49:44.527063  392207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 21:49:44.527072  392207 out.go:304] Setting ErrFile to fd 2...
	I0507 21:49:44.527075  392207 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 21:49:44.527153  392207 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	W0507 21:49:44.527252  392207 root.go:291] Error reading config file at /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/config/config.json: no such file or directory
	I0507 21:49:44.527345  392207 out.go:298] Setting JSON to true
	I0507 21:49:44.561343  392207 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":8952,"bootTime":1620415232,"procs":148,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0507 21:49:44.561442  392207 start.go:118] virtualization: kvm guest
	I0507 21:49:44.564145  392207 notify.go:169] Checking for updates...
	W0507 21:49:44.564405  392207 out.go:424] no arguments passed for "minikube skips various validations when --force is supplied; this may lead to unexpected behavior\n" - returning raw string
	W0507 21:49:44.566338  392207 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0507 21:49:44.566426  392207 driver.go:322] Setting default libvirt URI to qemu:///system
	W0507 21:49:44.566472  392207 start.go:628] api.Load failed for download-only-20210507214926-391940: filestore "download-only-20210507214926-391940": Docker machine "download-only-20210507214926-391940" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
	I0507 21:49:44.609065  392207 docker.go:119] docker version: linux-19.03.15
	I0507 21:49:44.609161  392207 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 21:49:44.683312  392207 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:44.640628189 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 21:49:44.683387  392207 docker.go:225] overlay module found
	I0507 21:49:44.685776  392207 start.go:276] selected driver: docker
	I0507 21:49:44.685794  392207 start.go:718] validating driver "docker" against &{Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 21:49:44.686259  392207 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 21:49:44.760085  392207 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:37 OomKillDisable:true NGoroutines:50 SystemTime:2021-05-07 21:49:44.718283708 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 21:49:44.760807  392207 cni.go:93] Creating CNI manager for ""
	I0507 21:49:44.760829  392207 cni.go:160] "docker" driver + containerd runtime found, recommending kindnet
	I0507 21:49:44.760844  392207 start_flags.go:273] config:
	{Name:download-only-20210507214926-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:8000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.22.0-alpha.1 ClusterName:download-only-20210507214926-391940 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 21:49:44.763148  392207 cache.go:111] Beginning downloading kic base image for docker with containerd
	W0507 21:49:44.763172  392207 out.go:424] no arguments passed for "Pulling base image ...\n" - returning raw string
	I0507 21:49:44.764668  392207 preload.go:98] Checking if preload exists for k8s version v1.22.0-alpha.1 and runtime containerd
	I0507 21:49:44.764716  392207 image.go:116] Checking for gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory
	I0507 21:49:44.764741  392207 image.go:119] Found gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e in local cache directory, skipping pull
	I0507 21:49:44.764749  392207 cache.go:131] gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e exists in cache, skipping pull
	I0507 21:49:44.807870  392207 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
	I0507 21:49:44.807887  392207 cache.go:54] Caching tarball of preloaded images
	I0507 21:49:44.807908  392207 preload.go:98] Checking if preload exists for k8s version v1.22.0-alpha.1 and runtime containerd
	I0507 21:49:44.847881  392207 preload.go:123] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
	I0507 21:49:44.850028  392207 preload.go:196] getting checksum for preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
	I0507 21:49:44.911406  392207 download.go:78] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4?checksum=md5:b27a383b22d0591a90cea87635e51b90 -> /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4
	I0507 21:49:51.619340  392207 preload.go:206] saving checksum for preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
	I0507 21:49:58.548245  392207 preload.go:218] verifying checksum of /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v10-v1.22.0-alpha.1-containerd-overlay2-amd64.tar.lz4 ...
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-20210507214926-391940"

-- /stdout --
aaa_download_only_test.go:167: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.22.0-alpha.1/LogsDuration (0.07s)

TestDownloadOnly/DeleteAll (1.66s)

=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 delete --all
aaa_download_only_test.go:184: (dbg) Done: out/minikube-linux-amd64 delete --all: (1.662944474s)
--- PASS: TestDownloadOnly/DeleteAll (1.66s)

TestDownloadOnly/DeleteAlwaysSucceeds (0.3s)

=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:196: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-20210507214926-391940
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.30s)

TestDownloadOnlyKic (4.09s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:221: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-20210507215004-391940 --force --alsologtostderr --driver=docker  --container-runtime=containerd
aaa_download_only_test.go:221: (dbg) Done: out/minikube-linux-amd64 start --download-only -p download-docker-20210507215004-391940 --force --alsologtostderr --driver=docker  --container-runtime=containerd: (1.884670731s)
helpers_test.go:171: Cleaning up "download-docker-20210507215004-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-20210507215004-391940
--- PASS: TestDownloadOnlyKic (4.09s)

TestOffline (169.58s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-containerd-20210507222034-391940 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-containerd-20210507222034-391940 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=containerd: (2m45.853737511s)
helpers_test.go:171: Cleaning up "offline-containerd-20210507222034-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-containerd-20210507222034-391940

=== CONT  TestOffline
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p offline-containerd-20210507222034-391940: (3.729093332s)
--- PASS: TestOffline (169.58s)

TestAddons/parallel/Registry (17.29s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:297: registry stabilized in 14.301018ms

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/Registry
helpers_test.go:335: "registry-dbwln" [f1184bbf-8eb7-4995-b123-d3a653788fe1] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:299: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.014497608s

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:335: "registry-proxy-qj6bf" [0926320e-2a48-4649-adda-ff0e8a5c4c01] Running

=== CONT  TestAddons/parallel/Registry
addons_test.go:302: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.02293997s
addons_test.go:307: (dbg) Run:  kubectl --context addons-20210507215008-391940 delete po -l run=registry-test --now
addons_test.go:312: (dbg) Run:  kubectl --context addons-20210507215008-391940 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"

=== CONT  TestAddons/parallel/Registry
addons_test.go:312: (dbg) Done: kubectl --context addons-20210507215008-391940 run --rm registry-test --restart=Never --image=busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (6.608716707s)
addons_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 ip

=== CONT  TestAddons/parallel/Registry
addons_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.29s)

TestAddons/parallel/Ingress (40.37s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: waiting 12m0s for pods matching "app.kubernetes.io/name=ingress-nginx" in namespace "ingress-nginx" ...
helpers_test.go:335: "ingress-nginx-admission-create-4qmw9" [85fd4547-9fae-40ef-8e05-9ff385e0ed84] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:158: (dbg) TestAddons/parallel/Ingress: app.kubernetes.io/name=ingress-nginx healthy within 54.840652ms
addons_test.go:165: (dbg) Run:  kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-ingv1beta.yaml
addons_test.go:170: kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-ingv1beta.yaml: unexpected stderr: Warning: networking.k8s.io/v1beta1 Ingress is deprecated in v1.19+, unavailable in v1.22+; use networking.k8s.io/v1 Ingress
(may be temporary)
addons_test.go:180: (dbg) Run:  kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: waiting 4m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:335: "nginx" [bdfee180-2959-4ab0-b32a-b139f54bb97c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])

=== CONT  TestAddons/parallel/Ingress
helpers_test.go:335: "nginx" [bdfee180-2959-4ab0-b32a-b139f54bb97c] Running

=== CONT  TestAddons/parallel/Ingress
addons_test.go:185: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.008174059s
addons_test.go:204: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"

=== CONT  TestAddons/parallel/Ingress
addons_test.go:230: (dbg) Run:  kubectl --context addons-20210507215008-391940 replace --force -f testdata/nginx-ingv1.yaml
2021/05/07 21:53:34 [DEBUG] GET http://192.168.58.2:5000

=== CONT  TestAddons/parallel/Ingress
addons_test.go:255: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:278: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable ingress --alsologtostderr -v=1

=== CONT  TestAddons/parallel/Ingress
addons_test.go:278: (dbg) Done: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable ingress --alsologtostderr -v=1: (28.690892152s)
--- PASS: TestAddons/parallel/Ingress (40.37s)

TestAddons/parallel/MetricsServer (5.7s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:374: metrics-server stabilized in 14.661778ms

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/MetricsServer
helpers_test.go:335: "metrics-server-7894db45f8-qf4dh" [4c32eac9-bdf8-4f04-8819-f21b3868130a] Running

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:376: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.014082277s

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:382: (dbg) Run:  kubectl --context addons-20210507215008-391940 top pods -n kube-system

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:399: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.70s)

TestAddons/parallel/HelmTiller (9.39s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:423: tiller-deploy stabilized in 14.86943ms

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...

=== CONT  TestAddons/parallel/HelmTiller
helpers_test.go:335: "tiller-deploy-7c86b7fbdf-ztctb" [de250d5c-e3a2-4cae-9ae0-2d7522ceceb3] Running

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:425: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.015240907s
addons_test.go:440: (dbg) Run:  kubectl --context addons-20210507215008-391940 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:440: (dbg) Done: kubectl --context addons-20210507215008-391940 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: (3.664716093s)
addons_test.go:445: kubectl --context addons-20210507215008-391940 run --rm helm-test --restart=Never --image=alpine/helm:2.16.3 -it --namespace=kube-system --serviceaccount=tiller -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
Error attaching, falling back to logs: 
addons_test.go:457: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.39s)

TestAddons/parallel/CSI (64.09s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:540: csi-hostpath-driver pods stabilized in 73.649264ms
addons_test.go:543: (dbg) Run:  kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210507215008-391940 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:553: (dbg) Run:  kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pv-pod.yaml

=== CONT  TestAddons/parallel/CSI
addons_test.go:558: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:335: "task-pv-pod" [761cad11-daa2-499c-8e57-791b77aac221] Pending

=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod" [761cad11-daa2-499c-8e57-791b77aac221] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod" [761cad11-daa2-499c-8e57-791b77aac221] Running
addons_test.go:558: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 18.007914257s
addons_test.go:563: (dbg) Run:  kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:568: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210507215008-391940 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:410: (dbg) Run:  kubectl --context addons-20210507215008-391940 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:573: (dbg) Run:  kubectl --context addons-20210507215008-391940 delete pod task-pv-pod
addons_test.go:573: (dbg) Done: kubectl --context addons-20210507215008-391940 delete pod task-pv-pod: (6.186074486s)
addons_test.go:579: (dbg) Run:  kubectl --context addons-20210507215008-391940 delete pvc hpvc
addons_test.go:585: (dbg) Run:  kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:590: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:385: (dbg) Run:  kubectl --context addons-20210507215008-391940 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:595: (dbg) Run:  kubectl --context addons-20210507215008-391940 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:600: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:335: "task-pv-pod-restore" [6ac2d1d2-91d3-4c59-a041-8fbf65c96c76] Pending
helpers_test.go:335: "task-pv-pod-restore" [6ac2d1d2-91d3-4c59-a041-8fbf65c96c76] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])

=== CONT  TestAddons/parallel/CSI
helpers_test.go:335: "task-pv-pod-restore" [6ac2d1d2-91d3-4c59-a041-8fbf65c96c76] Running
addons_test.go:600: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 23.006012517s
addons_test.go:605: (dbg) Run:  kubectl --context addons-20210507215008-391940 delete pod task-pv-pod-restore
addons_test.go:605: (dbg) Done: kubectl --context addons-20210507215008-391940 delete pod task-pv-pod-restore: (6.505500128s)
addons_test.go:609: (dbg) Run:  kubectl --context addons-20210507215008-391940 delete pvc hpvc-restore
addons_test.go:613: (dbg) Run:  kubectl --context addons-20210507215008-391940 delete volumesnapshot new-snapshot-demo
addons_test.go:617: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:617: (dbg) Done: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.725764207s)
addons_test.go:621: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (64.09s)

TestAddons/parallel/GCPAuth (48.9s)

=== RUN   TestAddons/parallel/GCPAuth
=== PAUSE TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:632: (dbg) Run:  kubectl --context addons-20210507215008-391940 create -f testdata/busybox.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:638: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [eb32c375-34a5-4d1d-bfc8-01ce23948eb4] Pending
helpers_test.go:335: "busybox" [eb32c375-34a5-4d1d-bfc8-01ce23948eb4] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [eb32c375-34a5-4d1d-bfc8-01ce23948eb4] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:638: (dbg) TestAddons/parallel/GCPAuth: integration-test=busybox healthy within 8.007132184s
addons_test.go:644: (dbg) Run:  kubectl --context addons-20210507215008-391940 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:681: (dbg) Run:  kubectl --context addons-20210507215008-391940 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
addons_test.go:697: (dbg) Run:  kubectl --context addons-20210507215008-391940 apply -f testdata/private-image.yaml

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:704: (dbg) TestAddons/parallel/GCPAuth: waiting 8m0s for pods matching "integration-test=private-image" in namespace "default" ...

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:335: "private-image-7ff9c8c74f-hz2rf" [ced80f0b-49a9-4fd1-8324-e2c2b29c7244] Pending / Ready:ContainersNotReady (containers with unready status: [private-image]) / ContainersReady:ContainersNotReady (containers with unready status: [private-image])

=== CONT  TestAddons/parallel/GCPAuth
helpers_test.go:335: "private-image-7ff9c8c74f-hz2rf" [ced80f0b-49a9-4fd1-8324-e2c2b29c7244] Running

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:704: (dbg) TestAddons/parallel/GCPAuth: integration-test=private-image healthy within 13.007543103s
addons_test.go:710: (dbg) Run:  out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable gcp-auth --alsologtostderr -v=1

=== CONT  TestAddons/parallel/GCPAuth
addons_test.go:710: (dbg) Done: out/minikube-linux-amd64 -p addons-20210507215008-391940 addons disable gcp-auth --alsologtostderr -v=1: (26.971779437s)
--- PASS: TestAddons/parallel/GCPAuth (48.90s)

TestCertOptions (48.61s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-20210507222144-391940 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd

=== CONT  TestCertOptions
cert_options_test.go:47: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-20210507222144-391940 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=containerd: (45.773058558s)
cert_options_test.go:58: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-20210507222144-391940 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:73: (dbg) Run:  kubectl --context cert-options-20210507222144-391940 config view
helpers_test.go:171: Cleaning up "cert-options-20210507222144-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-20210507222144-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-20210507222144-391940: (2.510124914s)
--- PASS: TestCertOptions (48.61s)

TestForceSystemdFlag (70.58s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-20210507222034-391940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd
docker_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-20210507222034-391940 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (1m7.782317233s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-20210507222034-391940 ssh "cat /etc/containerd/config.toml"
helpers_test.go:171: Cleaning up "force-systemd-flag-20210507222034-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-20210507222034-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-20210507222034-391940: (2.520936965s)
--- PASS: TestForceSystemdFlag (70.58s)

TestForceSystemdEnv (48.31s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-20210507222233-391940 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd

=== CONT  TestForceSystemdEnv
docker_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-20210507222233-391940 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=containerd: (44.735959164s)
docker_test.go:113: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-env-20210507222233-391940 ssh "cat /etc/containerd/config.toml"
helpers_test.go:171: Cleaning up "force-systemd-env-20210507222233-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-20210507222233-391940

=== CONT  TestForceSystemdEnv
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-20210507222233-391940: (3.183463089s)
--- PASS: TestForceSystemdEnv (48.31s)
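The `ssh "cat /etc/containerd/config.toml"` step in the run above is how the force-systemd tests verify that the systemd cgroup driver actually took effect on the node. A minimal, self-contained sketch of that check follows; the config fragment is an assumed sample (not copied from this run), and the real test fetches the file over SSH rather than from a local string.

```shell
# Hypothetical stand-in for /etc/containerd/config.toml on the node;
# the real test retrieves it with:
#   out/minikube-linux-amd64 -p <profile> ssh "cat /etc/containerd/config.toml"
config='
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
'

# A force-systemd check passes when the runc runtime options
# enable the systemd cgroup driver.
if printf '%s\n' "$config" | grep -q 'SystemdCgroup = true'; then
  verdict="systemd cgroup driver enabled"
else
  verdict="systemd cgroup driver NOT enabled"
fi
echo "$verdict"
```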

TestErrorSpam/start (54.18s)

=== RUN   TestErrorSpam/start
error_spam_test.go:208: Cleaning up 1 logfile(s) ...
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940 --dry-run
--- PASS: TestErrorSpam/start (54.18s)
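The harness above invokes the identical `--dry-run` start repeatedly to measure how much unexpected log output the command produces. A rough local reproduction loop is sketched below; the profile name and log dir are taken from the log, while the iteration count is illustrative and the command is only echoed (not executed) so the sketch runs without a minikube binary.

```shell
# Repeat the same dry-run invocation, as the error-spam harness does.
profile="nospam-20210507215455-391940"
log_dir="/tmp/${profile}"
count=0
for i in $(seq 1 5); do   # the real test loops far more times
  # Echo instead of executing, so this sketch is runnable anywhere.
  echo "out/minikube-linux-amd64 start -p ${profile} --log_dir ${log_dir} --dry-run"
  count=$((count + 1))
done
echo "ran ${count} dry-run invocations"
```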

TestErrorSpam/status (30.62s)

=== RUN   TestErrorSpam/status
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 status -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
--- PASS: TestErrorSpam/status (30.62s)

TestErrorSpam/pause (2.18s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 pause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
--- PASS: TestErrorSpam/pause (2.18s)

TestErrorSpam/unpause (0.53s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 unpause -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
--- PASS: TestErrorSpam/unpause (0.53s)

TestErrorSpam/stop (20.71s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:208: Cleaning up 0 logfile(s) ...
error_spam_test.go:166: (dbg) Run:  out/minikube-linux-amd64 stop -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940
error_spam_test.go:166: (dbg) Done: out/minikube-linux-amd64 stop -p nospam-20210507215455-391940 --log_dir /tmp/nospam-20210507215455-391940: (20.70894383s)
--- PASS: TestErrorSpam/stop (20.71s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1546: local sync path: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/files/etc/test/nested/copy/391940/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (134.51s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:541: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210507215728-391940 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd
E0507 21:58:17.776896  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.782547  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.792773  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.812966  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.853184  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:17.933426  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:18.093787  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:18.414349  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:19.055248  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:20.335526  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:22.896134  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:28.016615  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:38.257204  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:58:58.738134  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 21:59:39.699085  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
functional_test.go:541: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=containerd: (2m14.513229618s)
--- PASS: TestFunctional/serial/StartWithProxy (134.51s)
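The cert_rotation retries logged above arrive at roughly doubling intervals (about 5 ms apart at first, about 41 s apart by the final pair), which is the footprint of exponential backoff on the missing client.crt. A minimal Python sketch of that retry schedule; the function name and parameters are illustrative, not taken from client-go:

```python
def backoff_delays(base_ms: float = 5.0, factor: float = 2.0, attempts: int = 14):
    """Yield retry delays in milliseconds: base, base*factor, base*factor**2, ...

    Mirrors the spacing of the 'key failed' retries in the log above,
    whose gaps grow from ~5 ms to ~41 s (5 * 2**13 == 40960 ms).
    """
    delay = base_ms
    for _ in range(attempts):
        yield delay
        delay *= factor

delays = list(backoff_delays())
```

Doubling the gap keeps a persistently missing key from flooding the log while still retrying promptly at first.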

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (15.3s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:585: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210507215728-391940 --alsologtostderr -v=8
functional_test.go:585: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --alsologtostderr -v=8: (15.299207302s)
functional_test.go:589: soft start took 15.299847795s for "functional-20210507215728-391940" cluster.
--- PASS: TestFunctional/serial/SoftStart (15.30s)

TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:605: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.2s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:618: (dbg) Run:  kubectl --context functional-20210507215728-391940 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.20s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:3.1
functional_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:3.3
functional_test.go:910: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:3.3: (1.278513679s)
functional_test.go:910: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:latest
functional_test.go:910: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add k8s.gcr.io/pause:latest: (1.15376468s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:940: (dbg) Run:  docker build -t minikube-local-cache-test:functional-20210507215728-391940 /tmp/functional-20210507215728-391940244790430
functional_test.go:945: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 cache add minikube-local-cache-test:functional-20210507215728-391940
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3
functional_test.go:952: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/delete_k8s.gcr.io/pause:3.3 (0.06s)

TestFunctional/serial/CacheCmd/cache/list (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:959: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:972: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.28s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:994: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl rmi k8s.gcr.io/pause:latest
functional_test.go:1000: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
functional_test.go:1000: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl inspecti k8s.gcr.io/pause:latest: exit status 1 (280.796465ms)
-- stdout --
	FATA[0000] no such image "k8s.gcr.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1005: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 cache reload
functional_test.go:1005: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 cache reload: (1.150629636s)
functional_test.go:1010: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh sudo crictl inspecti k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.99s)
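The cache_reload subtest above follows a remove, verify-absent, reload, verify-present cycle: `crictl rmi` deletes the image from the node, `crictl inspecti` then exits non-zero with "no such image", and `minikube cache reload` pushes the cached image back. A toy Python model of that cycle; the ImageCache class is invented for illustration, the real test shells out to the commands shown:

```python
class ImageCache:
    """Toy stand-in for a node's image store plus minikube's local cache."""

    def __init__(self, cached_images):
        self.cache = set(cached_images)   # images known to `minikube cache`
        self.node = set(cached_images)    # images present on the node

    def rmi(self, image):
        self.node.discard(image)          # crictl rmi: remove from the node only

    def inspecti(self, image) -> int:
        # crictl inspecti: exit 0 if present, non-zero otherwise
        return 0 if image in self.node else 1

    def reload(self):
        self.node |= self.cache           # cache reload: re-push cached images

img = "k8s.gcr.io/pause:latest"
c = ImageCache({img})
c.rmi(img)
assert c.inspecti(img) == 1   # image gone from the node, inspecti fails
c.reload()
assert c.inspecti(img) == 0   # cache reload restored it
```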

TestFunctional/serial/CacheCmd/cache/delete (0.12s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1019: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:3.1
functional_test.go:1019: (dbg) Run:  out/minikube-linux-amd64 cache delete k8s.gcr.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

TestFunctional/serial/MinikubeKubectlCmd (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:636: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 kubectl -- --context functional-20210507215728-391940 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:655: (dbg) Run:  out/kubectl --context functional-20210507215728-391940 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

TestFunctional/serial/ExtraConfig (114.08s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:669: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210507215728-391940 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0507 22:01:01.620129  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
functional_test.go:669: (dbg) Done: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (1m54.074744132s)
functional_test.go:673: restart took 1m54.074873214s for "functional-20210507215728-391940" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (114.08s)

TestFunctional/serial/ComponentHealth (0.07s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:720: (dbg) Run:  kubectl --context functional-20210507215728-391940 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:734: etcd phase: Running
functional_test.go:744: etcd status: Ready
functional_test.go:734: kube-apiserver phase: Running
functional_test.go:744: kube-apiserver status: Ready
functional_test.go:734: kube-controller-manager phase: Running
functional_test.go:744: kube-controller-manager status: Ready
functional_test.go:734: kube-scheduler phase: Running
functional_test.go:744: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)
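ComponentHealth above lists the control-plane pods as JSON (`kubectl get po -l tier=control-plane -n kube-system -o=json`) and reports each one's phase and Ready status. A sketch of that extraction against a hand-built pod object; the field paths follow the Kubernetes Pod API, but the sample data here is fabricated:

```python
def component_health(pod: dict) -> tuple:
    """Return (phase, ready_status) for one control-plane pod object."""
    phase = pod["status"]["phase"]
    # The Ready condition carries "True"/"False" as a string, per the Pod API.
    ready = next(
        (c["status"] for c in pod["status"].get("conditions", [])
         if c["type"] == "Ready"),
        "Unknown",
    )
    return phase, ready

sample = {
    "metadata": {"name": "etcd-functional"},
    "status": {
        "phase": "Running",
        "conditions": [{"type": "Ready", "status": "True"}],
    },
}
```

Applied to each pod in the list, this yields exactly the "phase: Running" / "status: Ready" pairs logged above.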

TestFunctional/parallel/ConfigCmd (0.45s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus
functional_test.go:1045: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus: exit status 14 (83.519459ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 config set cpus 2
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 config unset cpus
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus
functional_test.go:1045: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 config get cpus: exit status 14 (66.100682ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.45s)
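The ConfigCmd run above shows the contract being tested: `config get` on a key that was never set (or was just unset) fails with exit status 14 and "specified key could not be found in config", while a set key returns normally. A minimal model of that behavior; the Config class and the EX_NOT_FOUND name are illustrative, not minikube's internals:

```python
EX_NOT_FOUND = 14  # exit status the test expects from `config get` on a missing key

class Config:
    """Toy key/value store mimicking the set/unset/get cycle in the log above."""

    def __init__(self):
        self._values = {}

    def set(self, key, value):
        self._values[key] = value

    def unset(self, key):
        self._values.pop(key, None)   # unsetting a missing key is not an error

    def get(self, key):
        """Return (exit_code, value): 0 with the value, or EX_NOT_FOUND."""
        if key in self._values:
            return 0, self._values[key]
        return EX_NOT_FOUND, None

cfg = Config()
assert cfg.get("cpus") == (EX_NOT_FOUND, None)   # unset -> get fails, status 14
cfg.set("cpus", 2)
assert cfg.get("cpus") == (0, 2)                 # set -> get succeeds
cfg.unset("cpus")
assert cfg.get("cpus") == (EX_NOT_FOUND, None)   # unset again -> status 14 again
```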

TestFunctional/parallel/DashboardCmd (5.28s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:811: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url -p functional-20210507215728-391940 --alsologtostderr -v=1]
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:816: (dbg) stopping [out/minikube-linux-amd64 dashboard --url -p functional-20210507215728-391940 --alsologtostderr -v=1] ...
helpers_test.go:499: unable to kill pid 452623: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (5.28s)

TestFunctional/parallel/DryRun (0.61s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:873: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210507215728-391940 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:873: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-20210507215728-391940 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=containerd: exit status 23 (270.163765ms)

-- stdout --
	* [functional-20210507215728-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	  - MINIKUBE_LOCATION=master
	* Using the docker driver based on existing profile
	  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	
	

-- /stdout --
** stderr ** 
	I0507 22:02:04.111923  451424 out.go:291] Setting OutFile to fd 1 ...
	I0507 22:02:04.112094  451424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:02:04.112105  451424 out.go:304] Setting ErrFile to fd 2...
	I0507 22:02:04.112110  451424 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:02:04.112216  451424 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	I0507 22:02:04.112461  451424 out.go:298] Setting JSON to false
	I0507 22:02:04.148036  451424 start.go:108] hostinfo: {"hostname":"debian-jenkins-agent-11","uptime":9692,"bootTime":1620415232,"procs":236,"os":"linux","platform":"debian","platformFamily":"debian","platformVersion":"9.13","kernelVersion":"4.9.0-15-amd64","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"c29e0b88-ef83-6765-d2fa-208fdce1af32"}
	I0507 22:02:04.148166  451424 start.go:118] virtualization: kvm guest
	I0507 22:02:04.151155  451424 out.go:170] * [functional-20210507215728-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	I0507 22:02:04.152992  451424 out.go:170]   - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	I0507 22:02:04.154611  451424 out.go:170]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0507 22:02:04.155979  451424 out.go:170]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	I0507 22:02:04.157568  451424 out.go:170]   - MINIKUBE_LOCATION=master
	I0507 22:02:04.158329  451424 driver.go:322] Setting default libvirt URI to qemu:///system
	I0507 22:02:04.207268  451424 docker.go:119] docker version: linux-19.03.15
	I0507 22:02:04.207344  451424 cli_runner.go:115] Run: docker system info --format "{{json .}}"
	I0507 22:02:04.306685  451424 info.go:261] docker info: {ID:LQL6:IQEY:TAE3:NG47:ROZQ:WA5O:XM2B:XDCN:3VXZ:7JF3:4DHA:WN5N Containers:2 ContainersRunning:2 ContainersPaused:0 ContainersStopped:0 Images:131 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:41 OomKillDisable:true NGoroutines:55 SystemTime:2021-05-07 22:02:04.247849903 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:4.9.0-15-amd64 OperatingSystem:Debian GNU/Linux 9 (stretch) OSType:linux Architecture:x86_64 IndexServerAddr
ess:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33742209024 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:debian-jenkins-agent-11 Labels:[] ExperimentalBuild:false ServerVersion:19.03.15 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:269548fa27e0089a8b8278fc4fc781d7f65a939b Expected:269548fa27e0089a8b8278fc4fc781d7f65a939b} RuncCommit:{ID:ff819c7e9184c13b7c2607fe6c30ae19403a7aff Expected:ff819c7e9184c13b7c2607fe6c30ae19403a7aff} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warni
ngs:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[] Warnings:<nil>}}
	I0507 22:02:04.306803  451424 docker.go:225] overlay module found
	I0507 22:02:04.309816  451424 out.go:170] * Using the docker driver based on existing profile
	I0507 22:02:04.309861  451424 start.go:276] selected driver: docker
	I0507 22:02:04.309869  451424 start.go:718] validating driver "docker" against &{Name:functional-20210507215728-391940 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.22@sha256:7cc3a3cb6e51c628d8ede157ad9e1f797e8d22a1b3cedc12d3f1999cb52f962e Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.2 ClusterName:functional-20210507215728-391940 Namespace:default APIServerName:minikubeCA APISe
rverNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:containerd CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision} {Component:kubelet Key:cni-conf-dir Value:/etc/cni/net.mk}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.58.2 Port:8441 KubernetesVersion:v1.20.2 ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false helm-tiller:false ingress:false ingress-dns:false istio:false istio-provisioner:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false registry:false registry-aliases:false regis
try-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false}
	I0507 22:02:04.310039  451424 start.go:729] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc:}
	W0507 22:02:04.310089  451424 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
	W0507 22:02:04.310104  451424 out.go:424] no arguments passed for "! Your cgroup does not allow setting memory.\n" - returning raw string
	W0507 22:02:04.310126  451424 out.go:235] ! Your cgroup does not allow setting memory.
	! Your cgroup does not allow setting memory.
	W0507 22:02:04.310139  451424 out.go:424] no arguments passed for "  - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities\n" - returning raw string
	I0507 22:02:04.311661  451424 out.go:170]   - More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities
	I0507 22:02:04.313954  451424 out.go:170] 
	W0507 22:02:04.314122  451424 out.go:235] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0507 22:02:04.315560  451424 out.go:170] 

** /stderr **
functional_test.go:888: (dbg) Run:  out/minikube-linux-amd64 start -p functional-20210507215728-391940 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestFunctional/parallel/DryRun (0.61s)

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:763: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 status

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:769: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:780: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)

TestFunctional/parallel/LogsCmd (2.02s)

=== RUN   TestFunctional/parallel/LogsCmd
=== PAUSE TestFunctional/parallel/LogsCmd

=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1081: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 logs

=== CONT  TestFunctional/parallel/LogsCmd
functional_test.go:1081: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 logs: (2.022718064s)
--- PASS: TestFunctional/parallel/LogsCmd (2.02s)

TestFunctional/parallel/LogsFileCmd (2.02s)

=== RUN   TestFunctional/parallel/LogsFileCmd
=== PAUSE TestFunctional/parallel/LogsFileCmd

=== CONT  TestFunctional/parallel/LogsFileCmd
functional_test.go:1097: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 logs --file /tmp/functional-20210507215728-391940612532608/logs.txt

=== CONT  TestFunctional/parallel/LogsFileCmd
functional_test.go:1097: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 logs --file /tmp/functional-20210507215728-391940612532608/logs.txt: (2.019413839s)
--- PASS: TestFunctional/parallel/LogsFileCmd (2.02s)

TestFunctional/parallel/MountCmd (6.88s)

=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:77: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-20210507215728-391940 /tmp/mounttest300063589:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:111: wrote "test-1620424921468213970" to /tmp/mounttest300063589/created-by-test
functional_test_mount_test.go:111: wrote "test-1620424921468213970" to /tmp/mounttest300063589/created-by-test-removed-by-pod
functional_test_mount_test.go:111: wrote "test-1620424921468213970" to /tmp/mounttest300063589/test-1620424921468213970
functional_test_mount_test.go:119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "findmnt -T /mount-9p | grep 9p"

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:119: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (372.633689ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:119: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:133: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh -- ls -la /mount-9p
functional_test_mount_test.go:137: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 May  7 22:02 created-by-test
-rw-r--r-- 1 docker docker 24 May  7 22:02 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 May  7 22:02 test-1620424921468213970
functional_test_mount_test.go:141: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh cat /mount-9p/test-1620424921468213970
functional_test_mount_test.go:152: (dbg) Run:  kubectl --context functional-20210507215728-391940 replace --force -f testdata/busybox-mount-test.yaml

=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:157: (dbg) TestFunctional/parallel/MountCmd: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:335: "busybox-mount" [12d9fac7-0cf8-42ff-b7e0-88ab86029f5a] Pending

=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [12d9fac7-0cf8-42ff-b7e0-88ab86029f5a] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])

=== CONT  TestFunctional/parallel/MountCmd
helpers_test.go:335: "busybox-mount" [12d9fac7-0cf8-42ff-b7e0-88ab86029f5a] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:157: (dbg) TestFunctional/parallel/MountCmd: integration-test=busybox-mount healthy within 3.066271303s
functional_test_mount_test.go:173: (dbg) Run:  kubectl --context functional-20210507215728-391940 logs busybox-mount
functional_test_mount_test.go:185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:185: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:94: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:98: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-20210507215728-391940 /tmp/mounttest300063589:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd (6.88s)

TestFunctional/parallel/ServiceCmd (12.62s)

=== RUN   TestFunctional/parallel/ServiceCmd
=== PAUSE TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1273: (dbg) Run:  kubectl --context functional-20210507215728-391940 create deployment hello-node --image=k8s.gcr.io/echoserver:1.8
functional_test.go:1279: (dbg) Run:  kubectl --context functional-20210507215728-391940 expose deployment hello-node --type=NodePort --port=8080

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1284: (dbg) TestFunctional/parallel/ServiceCmd: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:335: "hello-node-6cbfcd7cbc-5ltsz" [c67a1e00-05b0-4f49-acb2-68c0888df8cf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])

=== CONT  TestFunctional/parallel/ServiceCmd
helpers_test.go:335: "hello-node-6cbfcd7cbc-5ltsz" [c67a1e00-05b0-4f49-acb2-68c0888df8cf] Running

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1284: (dbg) TestFunctional/parallel/ServiceCmd: app=hello-node healthy within 10.021748113s
functional_test.go:1288: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 service list
functional_test.go:1288: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 service list: (1.003927824s)
functional_test.go:1301: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 service --namespace=default --https --url hello-node
functional_test.go:1310: found endpoint: https://192.168.58.2:30383
functional_test.go:1321: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 service hello-node --url --format={{.IP}}
2021/05/07 22:02:11 [DEBUG] GET http://127.0.0.1:33357/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1330: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 service hello-node --url

=== CONT  TestFunctional/parallel/ServiceCmd
functional_test.go:1336: found endpoint for hello-node: http://192.168.58.2:30383
functional_test.go:1347: Attempting to fetch http://192.168.58.2:30383 ...
functional_test.go:1366: http://192.168.58.2:30383: success! body:

Hostname: hello-node-6cbfcd7cbc-5ltsz

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.58.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.58.2:30383
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmd (12.62s)

TestFunctional/parallel/AddonsCmd (0.19s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1381: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 addons list
functional_test.go:1392: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.19s)

TestFunctional/parallel/PersistentVolumeClaim (35.71s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:335: "storage-provisioner" [e122fa6e-319e-4a3c-a844-8a52874079c3] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.009267528s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-20210507215728-391940 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-20210507215728-391940 apply -f testdata/storage-provisioner/pvc.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-20210507215728-391940 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210507215728-391940 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [f60e60b9-4a44-491a-a388-c4c14b2f05bc] Pending

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [f60e60b9-4a44-491a-a388-c4c14b2f05bc] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
helpers_test.go:335: "sp-pod" [f60e60b9-4a44-491a-a388-c4c14b2f05bc] Running

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.00636817s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-20210507215728-391940 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-20210507215728-391940 delete -f testdata/storage-provisioner/pod.yaml

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-20210507215728-391940 delete -f testdata/storage-provisioner/pod.yaml: (13.216135089s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-20210507215728-391940 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:335: "sp-pod" [6d12ada7-f683-4325-8970-173d8560dd91] Pending
helpers_test.go:335: "sp-pod" [6d12ada7-f683-4325-8970-173d8560dd91] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:335: "sp-pod" [6d12ada7-f683-4325-8970-173d8560dd91] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.513792069s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-20210507215728-391940 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (35.71s)

TestFunctional/parallel/SSHCmd (0.56s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1414: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "echo hello"

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1431: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.56s)

TestFunctional/parallel/CpCmd (0.55s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
functional_test.go:1464: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 cp testdata/cp-test.txt /home/docker/cp-test.txt

=== CONT  TestFunctional/parallel/CpCmd
functional_test.go:1472: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (0.55s)

TestFunctional/parallel/MySQL (18.94s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1498: (dbg) Run:  kubectl --context functional-20210507215728-391940 replace --force -f testdata/mysql.yaml

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1503: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:335: "mysql-9bbbc5bbb-gdm8w" [c84d6827-e88b-4ff3-aa25-9a597a30c022] Pending

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-gdm8w" [c84d6827-e88b-4ff3-aa25-9a597a30c022] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])

=== CONT  TestFunctional/parallel/MySQL
helpers_test.go:335: "mysql-9bbbc5bbb-gdm8w" [c84d6827-e88b-4ff3-aa25-9a597a30c022] Running
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1503: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 15.0284752s
functional_test.go:1510: (dbg) Run:  kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;"
functional_test.go:1510: (dbg) Non-zero exit: kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;": exit status 1 (201.909691ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1
** /stderr **
functional_test.go:1510: (dbg) Run:  kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;"
functional_test.go:1510: (dbg) Non-zero exit: kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;": exit status 1 (132.608958ms)
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1
** /stderr **
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1510: (dbg) Run:  kubectl --context functional-20210507215728-391940 exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (18.94s)
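The two transient failures above (authentication still initializing, then the server socket not yet available) are expected while MySQL starts up; the test simply re-runs the query until it succeeds. A minimal sketch of that retry pattern — the `retry` helper here is a hypothetical stand-in for the harness logic, not minikube code:

```shell
# retry N CMD...: re-run CMD up to N times, pausing briefly between attempts,
# mirroring how the test polls "show databases;" until MySQL is ready.
retry() {
  attempts=$1; shift
  i=1
  while [ "$i" -le "$attempts" ]; do
    if "$@"; then return 0; fi
    sleep 1
    i=$((i + 1))
  done
  return 1
}

# Cluster-dependent usage (context and pod name taken from this run):
# retry 10 kubectl --context functional-20210507215728-391940 \
#   exec mysql-9bbbc5bbb-gdm8w -- mysql -ppassword -e "show databases;"
```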
TestFunctional/parallel/FileSync (0.29s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1594: Checking for existence of /etc/test/nested/copy/391940/hosts within VM
functional_test.go:1595: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /etc/test/nested/copy/391940/hosts"
functional_test.go:1600: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.29s)

TestFunctional/parallel/CertSync (0.94s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1635: Checking for existence of /etc/ssl/certs/391940.pem within VM
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1636: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /etc/ssl/certs/391940.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1635: Checking for existence of /usr/share/ca-certificates/391940.pem within VM
functional_test.go:1636: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /usr/share/ca-certificates/391940.pem"
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1635: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1636: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 ssh "sudo cat /etc/ssl/certs/51391683.0"
--- PASS: TestFunctional/parallel/CertSync (0.94s)

TestFunctional/parallel/NodeLabels (0.08s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:197: (dbg) Run:  kubectl --context functional-20210507215728-391940 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.08s)
TestFunctional/parallel/LoadImage (2.25s)

=== RUN   TestFunctional/parallel/LoadImage
=== PAUSE TestFunctional/parallel/LoadImage
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:220: (dbg) Run:  docker pull busybox:latest
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:227: (dbg) Run:  docker tag busybox:latest docker.io/library/busybox:load-functional-20210507215728-391940
functional_test.go:233: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:load-functional-20210507215728-391940
=== CONT  TestFunctional/parallel/LoadImage
functional_test.go:233: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:load-functional-20210507215728-391940: (1.325960596s)
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- sudo crictl inspecti docker.io/library/busybox:load-functional-20210507215728-391940
--- PASS: TestFunctional/parallel/LoadImage (2.25s)

TestFunctional/parallel/RemoveImage (2.2s)

=== RUN   TestFunctional/parallel/RemoveImage
=== PAUSE TestFunctional/parallel/RemoveImage
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:261: (dbg) Run:  docker pull busybox:latest
functional_test.go:268: (dbg) Run:  docker tag busybox:latest docker.io/library/busybox:remove-functional-20210507215728-391940
functional_test.go:274: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:remove-functional-20210507215728-391940
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:274: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 image load docker.io/library/busybox:remove-functional-20210507215728-391940: (1.107691861s)
functional_test.go:280: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 image rm docker.io/library/busybox:remove-functional-20210507215728-391940
=== CONT  TestFunctional/parallel/RemoveImage
functional_test.go:317: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- sudo crictl images
--- PASS: TestFunctional/parallel/RemoveImage (2.20s)
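The load and remove tests above exercise the same pull → tag → `image load` pipeline. A dry-run sketch of that sequence — profile and tag names are taken from this run, and the `run` wrapper is a hypothetical guard (not part of minikube) so the commands only execute when `RUN=1` is set and a cluster is available:

```shell
PROFILE=functional-20210507215728-391940
TAG="docker.io/library/busybox:load-${PROFILE}"

# Echo each command instead of executing it unless RUN=1 is set.
run() {
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "would run: $*"; fi
}

run docker pull busybox:latest
run docker tag busybox:latest "$TAG"
run out/minikube-linux-amd64 -p "$PROFILE" image load "$TAG"
# Confirm the image reached the containerd runtime inside the node:
run out/minikube-linux-amd64 ssh -p "$PROFILE" -- sudo crictl inspecti "$TAG"
```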
TestFunctional/parallel/BuildImage (2.77s)
=== RUN   TestFunctional/parallel/BuildImage
=== PAUSE TestFunctional/parallel/BuildImage
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:369: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- nohup sudo -b buildkitd --oci-worker=false --containerd-worker=true --containerd-worker-namespace=k8s.io
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:341: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 image build -t localhost/my-image:functional-20210507215728-391940 testdata/build
=== CONT  TestFunctional/parallel/BuildImage
functional_test.go:341: (dbg) Done: out/minikube-linux-amd64 -p functional-20210507215728-391940 image build -t localhost/my-image:functional-20210507215728-391940 testdata/build: (2.084053516s)
functional_test.go:349: (dbg) Stderr: out/minikube-linux-amd64 -p functional-20210507215728-391940 image build -t localhost/my-image:functional-20210507215728-391940 testdata/build:
#1 [internal] load build definition from Dockerfile
#1 sha256:9eb594552e5d025d2bce286b65fad5f2a09934930926162e934caa35114bf81a
#1 transferring dockerfile: 77B done
#1 DONE 0.1s
#2 [internal] load .dockerignore
#2 sha256:2ec921ce38b301bbd169ccce5271cb6779e3b68b705446698dd15ac220641ad9
#2 transferring context: 2B done
#2 DONE 0.0s
#3 [internal] load metadata for docker.io/library/busybox:latest
#3 sha256:da853382a7535e068feae4d80bdd0ad2567df3d5cd484fd68f919294d091b053
#3 DONE 0.6s
#6 [internal] load build context
#6 sha256:be7193bea878d089af2dcd85aea770fc071ecf10080ebfae861e70772d3f96a9
#6 transferring context: 62B done
#6 DONE 0.0s
#4 [1/3] FROM docker.io/library/busybox@sha256:be4684e4004560b2cd1f12148b7120b0ea69c385bcc9b12a637537a2c60f97fb
#4 sha256:bf15a20fbfe1748e363d0c6c77a4959ff2d29933fd76edc4d49b2f00250e7594
#4 resolve docker.io/library/busybox@sha256:be4684e4004560b2cd1f12148b7120b0ea69c385bcc9b12a637537a2c60f97fb 0.0s done
#4 DONE 0.1s
#5 [2/3] RUN true
#5 sha256:636ef616c628288aead6af6c1eeab0d5b4c4ede932d18b469f3e24de50721e15
#5 DONE 0.4s
#7 [3/3] ADD content.txt /
#7 sha256:e3b3e60f4646a43120a1a5d7216d85e18175ec70f8ca036d289b76eebe70ede2
#7 DONE 0.1s
#8 exporting to image
#8 sha256:e8c613e07b0b7ff33893b694f7759a10d42e180f2b4dc349fb57dc6b71dcab00
#8 exporting layers
#8 exporting layers 0.1s done
#8 exporting manifest sha256:3ff7e73afa650cb47d12a386d892cd945a93ca0e440a7fa178a1308f0d75c3c5 0.0s done
#8 exporting config sha256:70047817b214f9de3a783d7b5cfc19303ad22cd5213f3314e358ec0689be9012 done
#8 naming to localhost/my-image:functional-20210507215728-391940 done
#8 DONE 0.1s
functional_test.go:303: (dbg) Run:  out/minikube-linux-amd64 ssh -p functional-20210507215728-391940 -- sudo crictl inspecti localhost/my-image:functional-20210507215728-391940
--- PASS: TestFunctional/parallel/BuildImage (2.77s)
TestFunctional/parallel/ListImages (0.26s)
=== RUN   TestFunctional/parallel/ListImages
=== PAUSE TestFunctional/parallel/ListImages
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:385: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 image ls
=== CONT  TestFunctional/parallel/ListImages
functional_test.go:390: (dbg) Stdout: out/minikube-linux-amd64 -p functional-20210507215728-391940 image ls:
k8s.gcr.io/pause:latest
k8s.gcr.io/pause:3.3
k8s.gcr.io/pause:3.2
k8s.gcr.io/pause:3.1
k8s.gcr.io/kube-scheduler:v1.20.2
k8s.gcr.io/kube-proxy:v1.20.2
k8s.gcr.io/kube-controller-manager:v1.20.2
k8s.gcr.io/kube-apiserver:v1.20.2
k8s.gcr.io/etcd:3.4.13-0
k8s.gcr.io/echoserver:1.8
k8s.gcr.io/coredns:1.7.0
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/minikube-local-cache-test:functional-20210507215728-391940
docker.io/library/busybox:load-functional-20210507215728-391940
docker.io/kubernetesui/metrics-scraper:v1.0.4
docker.io/kubernetesui/dashboard:v2.1.0
docker.io/kindest/kindnetd:v20210326-1e038dc5
docker.io/kindest/kindnetd:v20210220-5b7e6d01
--- PASS: TestFunctional/parallel/ListImages (0.26s)
TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:1729: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:1729: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:1729: (dbg) Run:  out/minikube-linux-amd64 -p functional-20210507215728-391940 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.11s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1118: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1122: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.58s)

TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1156: (dbg) Run:  out/minikube-linux-amd64 profile list
=== CONT  TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1161: Took "501.912091ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1170: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1175: Took "71.840413ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.57s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1206: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
=== CONT  TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1211: Took "368.393444ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1219: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1224: Took "79.031074ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:126: (dbg) daemon: [out/minikube-linux-amd64 -p functional-20210507215728-391940 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:164: (dbg) Run:  kubectl --context functional-20210507215728-391940 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:229: tunnel at http://10.102.97.53 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:364: (dbg) stopping [out/minikube-linux-amd64 -p functional-20210507215728-391940 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/delete_busybox_image (0.08s)

=== RUN   TestFunctional/delete_busybox_image
functional_test.go:164: (dbg) Run:  docker rmi -f docker.io/library/busybox:load-functional-20210507215728-391940
functional_test.go:169: (dbg) Run:  docker rmi -f docker.io/library/busybox:remove-functional-20210507215728-391940
--- PASS: TestFunctional/delete_busybox_image (0.08s)

TestFunctional/delete_my-image_image (0.04s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:176: (dbg) Run:  docker rmi -f localhost/my-image:functional-20210507215728-391940
--- PASS: TestFunctional/delete_my-image_image (0.04s)

TestFunctional/delete_minikube_cached_images (0.04s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:184: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-20210507215728-391940
--- PASS: TestFunctional/delete_minikube_cached_images (0.04s)
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)
TestErrorJSONOutput (0.41s)
=== RUN   TestErrorJSONOutput
json_output_test.go:146: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-20210507220532-391940 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:146: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-20210507220532-391940 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (97.998874ms)
-- stdout --
	{"data":{"currentstep":"0","message":"[json-output-error-20210507220532-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"ab31f5d7-a561-4530-a939-bea3e6386b6b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig"},"datacontenttype":"application/json","id":"f35e9b35-87c7-43b5-ba6b-29b5f07ffc05","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"b2fe58a9-a8ed-4c53-91d3-cb949223e3c8","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube"},"datacontenttype":"application/json","id":"c156cef0-2391-437c-84f9-3cfc15ca0c3a","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=master"},"datacontenttype":"application/json","id":"afe5b606-480c-4b22-a30d-c2f4e3dcc054","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""},"datacontenttype":"application/json","id":"ac1f48f5-dfe8-46c0-aae8-92545e5f30a5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
helpers_test.go:171: Cleaning up "json-output-error-20210507220532-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-20210507220532-391940
--- PASS: TestErrorJSONOutput (0.41s)
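The `-- stdout --` block above shows that `minikube start --output=json` emits one CloudEvent-style JSON object per line. As an illustration (not part of the test run), the final error event from that output can be parsed to recover the structured exit information:

```python
import json

# The event below is copied verbatim from the stdout of TestErrorJSONOutput
# above; this sketch only demonstrates pulling fields out of it.
event_line = (
    '{"data":{"advice":"","exitcode":"56","issues":"",'
    '"message":"The driver \'fail\' is not supported on linux/amd64",'
    '"name":"DRV_UNSUPPORTED_OS","url":""},'
    '"datacontenttype":"application/json",'
    '"id":"ac1f48f5-dfe8-46c0-aae8-92545e5f30a5",'
    '"source":"https://minikube.sigs.k8s.io/",'
    '"specversion":"1.0",'
    '"type":"io.k8s.sigs.minikube.error"}'
)

event = json.loads(event_line)
# Error events carry the exit code the CLI will return (56 here, matching
# the "exit status 56" recorded by the test).
if event["type"] == "io.k8s.sigs.minikube.error":
    print(event["data"]["name"], event["data"]["exitcode"])
    # -> DRV_UNSUPPORTED_OS 56
```

This is how the JSON-output tests can assert on machine-readable failures instead of scraping human-oriented text.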

TestKicCustomNetwork/create_custom_network (28.54s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210507220532-391940 --network=
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210507220532-391940 --network=: (26.356092993s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210507220532-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210507220532-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210507220532-391940: (2.144656019s)
--- PASS: TestKicCustomNetwork/create_custom_network (28.54s)

TestKicCustomNetwork/use_default_bridge_network (25.23s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-20210507220601-391940 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-20210507220601-391940 --network=bridge: (22.844981975s)
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:171: Cleaning up "docker-network-20210507220601-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-20210507220601-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-20210507220601-391940: (2.3482113s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.23s)

TestKicExistingNetwork (24.66s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:101: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-20210507220626-391940 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-20210507220626-391940 --network=existing-network: (21.832666547s)
helpers_test.go:171: Cleaning up "existing-network-20210507220626-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-20210507220626-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-20210507220626-391940: (2.510194459s)
kic_custom_network_test.go:82: error deleting kic network, may need to delete manually: [unable to delete a network that is attached to a running container]
--- PASS: TestKicExistingNetwork (24.66s)

TestMainNoArgs (0.06s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

TestMultiNode/serial/FreshStart2Nodes (177.4s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:76: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0507 22:06:59.411601  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.416897  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.427132  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.447365  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.487609  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.567883  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:06:59.728257  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:00.048981  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:00.689932  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:01.970967  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:04.531539  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:09.652146  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:19.893083  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:07:40.373219  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:08:17.776899  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 22:08:21.334151  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:09:43.255194  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
multinode_test.go:76: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m56.887864432s)
multinode_test.go:82: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (177.40s)

TestMultiNode/serial/DeployApp2Nodes (4.35s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:404: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:409: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- rollout status deployment/busybox
multinode_test.go:409: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- rollout status deployment/busybox: (2.285887306s)
multinode_test.go:415: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:427: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:435: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-n6bwx -- nslookup kubernetes.io
multinode_test.go:435: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-w7j2s -- nslookup kubernetes.io
multinode_test.go:444: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-n6bwx -- nslookup kubernetes.default
multinode_test.go:444: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-w7j2s -- nslookup kubernetes.default
multinode_test.go:451: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-n6bwx -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:451: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-20210507220651-391940 -- exec busybox-6cd5ff77cb-w7j2s -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.35s)

TestMultiNode/serial/AddNode (43.59s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:101: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210507220651-391940 -v 3 --alsologtostderr
multinode_test.go:101: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-20210507220651-391940 -v 3 --alsologtostderr: (42.866523588s)
multinode_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (43.59s)

TestMultiNode/serial/ProfileList (0.3s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:123: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.30s)

TestMultiNode/serial/StopNode (2.45s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:163: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 node stop m03
multinode_test.go:163: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node stop m03: (1.319644578s)
multinode_test.go:169: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status
multinode_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status: exit status 7 (573.975788ms)

-- stdout --
	multinode-20210507220651-391940
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210507220651-391940-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210507220651-391940-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:176: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr: exit status 7 (557.394369ms)

-- stdout --
	multinode-20210507220651-391940
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-20210507220651-391940-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-20210507220651-391940-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0507 22:10:38.940588  483365 out.go:291] Setting OutFile to fd 1 ...
	I0507 22:10:38.940697  483365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:10:38.940708  483365 out.go:304] Setting ErrFile to fd 2...
	I0507 22:10:38.940712  483365 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:10:38.940808  483365 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	I0507 22:10:38.940996  483365 out.go:298] Setting JSON to false
	I0507 22:10:38.941019  483365 mustload.go:65] Loading cluster: multinode-20210507220651-391940
	I0507 22:10:38.941280  483365 status.go:253] checking status of multinode-20210507220651-391940 ...
	I0507 22:10:38.941701  483365 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940 --format={{.State.Status}}
	I0507 22:10:38.981126  483365 status.go:328] multinode-20210507220651-391940 host status = "Running" (err=<nil>)
	I0507 22:10:38.981152  483365 host.go:66] Checking if "multinode-20210507220651-391940" exists ...
	I0507 22:10:38.981390  483365 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210507220651-391940
	I0507 22:10:39.017933  483365 host.go:66] Checking if "multinode-20210507220651-391940" exists ...
	I0507 22:10:39.018235  483365 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 22:10:39.018279  483365 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210507220651-391940
	I0507 22:10:39.055325  483365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33134 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/multinode-20210507220651-391940/id_rsa Username:docker}
	I0507 22:10:39.143868  483365 ssh_runner.go:149] Run: systemctl --version
	I0507 22:10:39.147498  483365 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0507 22:10:39.156783  483365 kubeconfig.go:93] found "multinode-20210507220651-391940" server: "https://192.168.58.2:8443"
	I0507 22:10:39.156807  483365 api_server.go:148] Checking apiserver status ...
	I0507 22:10:39.156834  483365 ssh_runner.go:149] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0507 22:10:39.174718  483365 ssh_runner.go:149] Run: sudo egrep ^[0-9]+:freezer: /proc/1073/cgroup
	I0507 22:10:39.181377  483365 api_server.go:164] apiserver freezer: "8:freezer:/docker/3e87985ac3319dca529cb14dfcc5c6349eb3e4097a5de78dccbe1a9c1d6c88ed/kubepods/burstable/pod2646d64d150a66972245a0ba74a26943/8fa65ea5262a6cbb08af051947d06beed1a98ca396d482b6153b5102688378df"
	I0507 22:10:39.181428  483365 ssh_runner.go:149] Run: sudo cat /sys/fs/cgroup/freezer/docker/3e87985ac3319dca529cb14dfcc5c6349eb3e4097a5de78dccbe1a9c1d6c88ed/kubepods/burstable/pod2646d64d150a66972245a0ba74a26943/8fa65ea5262a6cbb08af051947d06beed1a98ca396d482b6153b5102688378df/freezer.state
	I0507 22:10:39.187199  483365 api_server.go:186] freezer state: "THAWED"
	I0507 22:10:39.187248  483365 api_server.go:223] Checking apiserver healthz at https://192.168.58.2:8443/healthz ...
	I0507 22:10:39.191939  483365 api_server.go:249] https://192.168.58.2:8443/healthz returned 200:
	ok
	I0507 22:10:39.191958  483365 status.go:419] multinode-20210507220651-391940 apiserver status = Running (err=<nil>)
	I0507 22:10:39.191967  483365 status.go:255] multinode-20210507220651-391940 status: &{Name:multinode-20210507220651-391940 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0507 22:10:39.191987  483365 status.go:253] checking status of multinode-20210507220651-391940-m02 ...
	I0507 22:10:39.192211  483365 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940-m02 --format={{.State.Status}}
	I0507 22:10:39.229873  483365 status.go:328] multinode-20210507220651-391940-m02 host status = "Running" (err=<nil>)
	I0507 22:10:39.229898  483365 host.go:66] Checking if "multinode-20210507220651-391940-m02" exists ...
	I0507 22:10:39.230165  483365 cli_runner.go:115] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-20210507220651-391940-m02
	I0507 22:10:39.267382  483365 host.go:66] Checking if "multinode-20210507220651-391940-m02" exists ...
	I0507 22:10:39.267704  483365 ssh_runner.go:149] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0507 22:10:39.267749  483365 cli_runner.go:115] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-20210507220651-391940-m02
	I0507 22:10:39.304365  483365 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33139 SSHKeyPath:/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/machines/multinode-20210507220651-391940-m02/id_rsa Username:docker}
	I0507 22:10:39.391615  483365 ssh_runner.go:149] Run: sudo systemctl is-active --quiet service kubelet
	I0507 22:10:39.400056  483365 status.go:255] multinode-20210507220651-391940-m02 status: &{Name:multinode-20210507220651-391940-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0507 22:10:39.400089  483365 status.go:253] checking status of multinode-20210507220651-391940-m03 ...
	I0507 22:10:39.400347  483365 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940-m03 --format={{.State.Status}}
	I0507 22:10:39.438708  483365 status.go:328] multinode-20210507220651-391940-m03 host status = "Stopped" (err=<nil>)
	I0507 22:10:39.438734  483365 status.go:341] host is not running, skipping remaining checks
	I0507 22:10:39.438739  483365 status.go:255] multinode-20210507220651-391940-m03 status: &{Name:multinode-20210507220651-391940-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.45s)

TestMultiNode/serial/StartAfterStop (35.46s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:197: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:207: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 node start m03 --alsologtostderr
multinode_test.go:207: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node start m03 --alsologtostderr: (34.635659666s)
multinode_test.go:214: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status
multinode_test.go:228: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (35.46s)

TestMultiNode/serial/DeleteNode (5.48s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:317: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 node delete m03
multinode_test.go:317: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 node delete m03: (4.802425419s)
multinode_test.go:323: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:337: (dbg) Run:  docker volume ls
multinode_test.go:347: (dbg) Run:  kubectl get nodes
multinode_test.go:355: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.48s)
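The go-template passed to `kubectl get nodes` above prints the status of each node's `Ready` condition. The same filter, sketched in Python against hand-written sample data (not output captured from this cluster), shows what the template is selecting:

```python
import json

# Stand-in for `kubectl get nodes -o json`: two nodes, each with a
# conditions list; only the "Ready" condition's status is of interest.
nodes = json.loads("""{"items": [
  {"status": {"conditions": [
    {"type": "MemoryPressure", "status": "False"},
    {"type": "Ready", "status": "True"}]}},
  {"status": {"conditions": [
    {"type": "Ready", "status": "True"}]}}
]}""")

# Mirror of the template's nested range + `if eq .type "Ready"` check.
ready = [c["status"]
         for item in nodes["items"]
         for c in item["status"]["conditions"]
         if c["type"] == "Ready"]
print(ready)  # -> ['True', 'True'] (one entry per node)
```

After a node delete, the test expects exactly one `True` per remaining node, which is why it inspects conditions rather than just counting nodes.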

TestMultiNode/serial/StopMultiNode (41.55s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:237: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 stop
E0507 22:11:59.412377  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
multinode_test.go:237: (dbg) Done: out/minikube-linux-amd64 -p multinode-20210507220651-391940 stop: (41.284314891s)
multinode_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status
multinode_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status: exit status 7 (137.348592ms)

-- stdout --
	multinode-20210507220651-391940
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210507220651-391940-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:250: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:250: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr: exit status 7 (130.59219ms)

-- stdout --
	multinode-20210507220651-391940
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-20210507220651-391940-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0507 22:12:01.858361  487075 out.go:291] Setting OutFile to fd 1 ...
	I0507 22:12:01.858536  487075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:12:01.858545  487075 out.go:304] Setting ErrFile to fd 2...
	I0507 22:12:01.858548  487075 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0507 22:12:01.858633  487075 root.go:316] Updating PATH: /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/bin
	I0507 22:12:01.858776  487075 out.go:298] Setting JSON to false
	I0507 22:12:01.858795  487075 mustload.go:65] Loading cluster: multinode-20210507220651-391940
	I0507 22:12:01.859033  487075 status.go:253] checking status of multinode-20210507220651-391940 ...
	I0507 22:12:01.859384  487075 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940 --format={{.State.Status}}
	I0507 22:12:01.896299  487075 status.go:328] multinode-20210507220651-391940 host status = "Stopped" (err=<nil>)
	I0507 22:12:01.896335  487075 status.go:341] host is not running, skipping remaining checks
	I0507 22:12:01.896341  487075 status.go:255] multinode-20210507220651-391940 status: &{Name:multinode-20210507220651-391940 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0507 22:12:01.896395  487075 status.go:253] checking status of multinode-20210507220651-391940-m02 ...
	I0507 22:12:01.896641  487075 cli_runner.go:115] Run: docker container inspect multinode-20210507220651-391940-m02 --format={{.State.Status}}
	I0507 22:12:01.932850  487075 status.go:328] multinode-20210507220651-391940-m02 host status = "Stopped" (err=<nil>)
	I0507 22:12:01.932871  487075 status.go:341] host is not running, skipping remaining checks
	I0507 22:12:01.932877  487075 status.go:255] multinode-20210507220651-391940-m02 status: &{Name:multinode-20210507220651-391940-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (41.55s)

TestMultiNode/serial/RestartMultiNode (149.83s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:267: (dbg) Run:  docker version -f {{.Server.Version}}
multinode_test.go:277: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd
E0507 22:12:27.095937  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:13:17.776410  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
multinode_test.go:277: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210507220651-391940 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=containerd: (2m29.1286647s)
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 -p multinode-20210507220651-391940 status --alsologtostderr
multinode_test.go:297: (dbg) Run:  kubectl get nodes
multinode_test.go:305: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (149.83s)

TestMultiNode/serial/ValidateNameConflict (49.38s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:366: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-20210507220651-391940
multinode_test.go:375: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m02 --driver=docker  --container-runtime=containerd
multinode_test.go:375: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m02 --driver=docker  --container-runtime=containerd: exit status 14 (105.907871ms)

-- stdout --
	* [multinode-20210507220651-391940-m02] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	  - MINIKUBE_LOCATION=master
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-20210507220651-391940-m02' is duplicated with machine name 'multinode-20210507220651-391940-m02' in profile 'multinode-20210507220651-391940'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:383: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m03 --driver=docker  --container-runtime=containerd
E0507 22:14:40.821140  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
multinode_test.go:383: (dbg) Done: out/minikube-linux-amd64 start -p multinode-20210507220651-391940-m03 --driver=docker  --container-runtime=containerd: (46.645382456s)
multinode_test.go:390: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-20210507220651-391940
multinode_test.go:390: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-20210507220651-391940: exit status 80 (274.51484ms)

-- stdout --
	* Adding node m03 to cluster multinode-20210507220651-391940
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: Node multinode-20210507220651-391940-m03 already exists in multinode-20210507220651-391940-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────╮
	│                                                                             │
	│    * If the above advice does not help, please let us know:                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose               │
	│                                                                             │
	│    * Please attach the following file to the GitHub issue:                  │
	│    * - /tmp/minikube_node_5d50ea0fe0ecd435d89f51fbcdcec837640ed6a1_0.log    │
	│                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────╯
	

** /stderr **
multinode_test.go:395: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-20210507220651-391940-m03
multinode_test.go:395: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-20210507220651-391940-m03: (2.296185405s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (49.38s)

TestDebPackageInstall/install_amd64_debian:sid/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.69s)

=== RUN   TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:sid sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (10.685124162s)
--- PASS: TestDebPackageInstall/install_amd64_debian:sid/kvm2-driver (10.69s)

TestDebPackageInstall/install_amd64_debian:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.79s)

=== RUN   TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (9.786481117s)
--- PASS: TestDebPackageInstall/install_amd64_debian:latest/kvm2-driver (9.79s)

TestDebPackageInstall/install_amd64_debian:10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:10/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.72s)

=== RUN   TestDebPackageInstall/install_amd64_debian:10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (9.721428033s)
--- PASS: TestDebPackageInstall/install_amd64_debian:10/kvm2-driver (9.72s)

TestDebPackageInstall/install_amd64_debian:9/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/minikube
--- PASS: TestDebPackageInstall/install_amd64_debian:9/minikube (0.00s)

TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.11s)

=== RUN   TestDebPackageInstall/install_amd64_debian:9/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp debian:9 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (8.112034527s)
--- PASS: TestDebPackageInstall/install_amd64_debian:9/kvm2-driver (8.11s)

TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.03s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:latest sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (16.028776212s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:latest/kvm2-driver (16.03s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.33s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.10 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (13.331297713s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.10/kvm2-driver (13.33s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (14.52s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:20.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (14.524606265s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:20.04/kvm2-driver (14.52s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/minikube (0.00s)

TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.61s)

=== RUN   TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver
pkg_install_test.go:104: (dbg) Run:  docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb"
E0507 22:16:59.411352  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
pkg_install_test.go:104: (dbg) Done: docker run --rm -v/home/jenkins/workspace/Docker_Linux_containerd_integration/out:/var/tmp ubuntu:18.04 sh -c "apt-get update; apt-get install -y libvirt0; dpkg -i /var/tmp/docker-machine-driver-kvm2_1.20.0-0_amd64.deb": (12.61121609s)
--- PASS: TestDebPackageInstall/install_amd64_ubuntu:18.04/kvm2-driver (12.61s)

TestPreload (133.16s)

=== RUN   TestPreload
preload_test.go:48: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0
E0507 22:18:17.776519  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
preload_test.go:48: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.0: (1m29.52970925s)
preload_test.go:61: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210507221700-391940 -- sudo crictl pull busybox
preload_test.go:61: (dbg) Done: out/minikube-linux-amd64 ssh -p test-preload-20210507221700-391940 -- sudo crictl pull busybox: (1.116569677s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3
preload_test.go:71: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-20210507221700-391940 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=containerd --kubernetes-version=v1.17.3: (39.447351953s)
preload_test.go:80: (dbg) Run:  out/minikube-linux-amd64 ssh -p test-preload-20210507221700-391940 -- sudo crictl image ls
helpers_test.go:171: Cleaning up "test-preload-20210507221700-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-20210507221700-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-20210507221700-391940: (2.789211788s)
--- PASS: TestPreload (133.16s)

TestScheduledStopUnix (71.25s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:126: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-20210507221913-391940 --memory=2048 --driver=docker  --container-runtime=containerd
scheduled_stop_test.go:126: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-20210507221913-391940 --memory=2048 --driver=docker  --container-runtime=containerd: (46.879518455s)
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --schedule 5m
scheduled_stop_test.go:189: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:167: signal error was:  <nil>
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --schedule 8s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --cancel-scheduled
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210507221913-391940
scheduled_stop_test.go:135: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-20210507221913-391940 --schedule 5s
scheduled_stop_test.go:167: signal error was:  os: process already finished
scheduled_stop_test.go:203: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940
scheduled_stop_test.go:174: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-20210507221913-391940 -n scheduled-stop-20210507221913-391940: exit status 7 (97.064306ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:174: status error: exit status 7 (may be ok)
helpers_test.go:171: Cleaning up "scheduled-stop-20210507221913-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-20210507221913-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-20210507221913-391940: (2.106647556s)
--- PASS: TestScheduledStopUnix (71.25s)

TestInsufficientStorage (8.88s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-20210507222025-391940 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-20210507222025-391940 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=containerd: exit status 26 (6.066984227s)

-- stdout --
	{"data":{"currentstep":"0","message":"[insufficient-storage-20210507222025-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"},"datacontenttype":"application/json","id":"a2d35724-0360-4159-ab8c-8e8c11cf2d8b","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig"},"datacontenttype":"application/json","id":"306106c7-ebcc-40d5-9a9c-94f38a03e85d","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"},"datacontenttype":"application/json","id":"e254b9cd-53d2-4bf1-aabc-ab7f626758aa","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube"},"datacontenttype":"application/json","id":"a75ce941-af16-40ab-88b0-fc4cb4954fde","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_LOCATION=master"},"datacontenttype":"application/json","id":"ff6bf701-7039-408b-92fb-7a3c726c45a5","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"},"datacontenttype":"application/json","id":"9a99ef5f-99ed-4658-bf9b-afd208aa3311","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"},"datacontenttype":"application/json","id":"3ed296c9-790e-4bc4-8870-d631a1d4932f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"message":"Your cgroup does not allow setting memory."},"datacontenttype":"application/json","id":"05aed3f7-3dcc-4263-8852-e4f552f0b240","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.warning"}
	{"data":{"message":"More information: https://docs.docker.com/engine/install/linux-postinstall/#your-kernel-does-not-support-cgroup-swap-limit-capabilities"},"datacontenttype":"application/json","id":"132aba9c-1e97-47a5-85b4-cebe617df81f","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.info"}
	{"data":{"currentstep":"3","message":"Starting control plane node insufficient-storage-20210507222025-391940 in cluster insufficient-storage-20210507222025-391940","name":"Starting Node","totalsteps":"19"},"datacontenttype":"application/json","id":"5dfe4c8d-95cd-43ee-a475-c310fc4911cb","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"5","message":"Pulling base image ...","name":"Pulling Base Image","totalsteps":"19"},"datacontenttype":"application/json","id":"18182f99-cbee-4916-8948-41d0e3a75672","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"},"datacontenttype":"application/json","id":"26614e49-178d-4102-9237-70de4c917dd9","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.step"}
	{"data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100%% of capacity)","name":"RSRC_DOCKER_STORAGE","url":""},"datacontenttype":"application/json","id":"d9cbe3c5-0e58-45a6-acaa-0d106d8707fe","source":"https://minikube.sigs.k8s.io/","specversion":"1.0","type":"io.k8s.sigs.minikube.error"}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster: exit status 7 (275.405587ms)

-- stdout --
	{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.20.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0507 22:20:31.507757  530543 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210507222025-391940" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-20210507222025-391940 --output=json --layout=cluster: exit status 7 (275.476286ms)

-- stdout --
	{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.20.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-20210507222025-391940","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0507 22:20:31.784164  530602 status.go:413] kubeconfig endpoint: extract IP: "insufficient-storage-20210507222025-391940" does not appear in /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	E0507 22:20:31.794777  530602 status.go:557] unable to read event log: stat: stat /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/insufficient-storage-20210507222025-391940/events.json: no such file or directory

** /stderr **
helpers_test.go:171: Cleaning up "insufficient-storage-20210507222025-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-20210507222025-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-20210507222025-391940: (2.259848814s)
--- PASS: TestInsufficientStorage (8.88s)

TestRunningBinaryUpgrade (134.17s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Run:  /tmp/minikube-v1.9.0.011766915.exe start -p running-upgrade-20210507222323-391940 --memory=2200 --vm-driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:119: (dbg) Done: /tmp/minikube-v1.9.0.011766915.exe start -p running-upgrade-20210507222323-391940 --memory=2200 --vm-driver=docker  --container-runtime=containerd: (1m30.608727762s)
version_upgrade_test.go:129: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-20210507222323-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:129: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-20210507222323-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (40.242727636s)
helpers_test.go:171: Cleaning up "running-upgrade-20210507222323-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-20210507222323-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-20210507222323-391940: (2.952446564s)
--- PASS: TestRunningBinaryUpgrade (134.17s)

TestKubernetesUpgrade (183.61s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m13.727755454s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210507222034-391940
version_upgrade_test.go:232: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-20210507222034-391940: (1.431800366s)
version_upgrade_test.go:237: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-20210507222034-391940 status --format={{.Host}}
version_upgrade_test.go:237: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-20210507222034-391940 status --format={{.Host}}: exit status 7 (109.555476ms)
-- stdout --
	Stopped
-- /stdout --
version_upgrade_test.go:239: status error: exit status 7 (may be ok)
version_upgrade_test.go:248: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0507 22:21:59.411227  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:248: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (1m7.134351741s)
version_upgrade_test.go:253: (dbg) Run:  kubectl --context kubernetes-upgrade-20210507222034-391940 version --output=json
version_upgrade_test.go:272: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:274: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd
version_upgrade_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.14.0 --driver=docker  --container-runtime=containerd: exit status 106 (126.443112ms)
-- stdout --
	* [kubernetes-upgrade-20210507222034-391940] minikube v1.20.0 on Debian 9.13 (kvm/amd64)
	  - KUBECONFIG=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/kubeconfig
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube
	  - MINIKUBE_LOCATION=master
	
	
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.22.0-alpha.1 cluster to v1.14.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.14.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-20210507222034-391940
	    minikube start -p kubernetes-upgrade-20210507222034-391940 --kubernetes-version=v1.14.0
	    
	    2) Create a second cluster with Kubernetes 1.14.0, by running:
	    
	    minikube start -p kubernetes-upgrade-20210507222034-3919402 --kubernetes-version=v1.14.0
	    
	    3) Use the existing cluster at version Kubernetes 1.22.0-alpha.1, by running:
	    
	    minikube start -p kubernetes-upgrade-20210507222034-391940 --kubernetes-version=v1.22.0-alpha.1
	    
** /stderr **
version_upgrade_test.go:278: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:280: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
E0507 22:23:17.777054  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:280: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-20210507222034-391940 --memory=2200 --kubernetes-version=v1.22.0-alpha.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (37.082624006s)
helpers_test.go:171: Cleaning up "kubernetes-upgrade-20210507222034-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210507222034-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-20210507222034-391940: (3.917881691s)
--- PASS: TestKubernetesUpgrade (183.61s)
TestMissingContainerUpgrade (359.29s)
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:314: (dbg) Run:  /tmp/minikube-v1.9.1.794961713.exe start -p missing-upgrade-20210507222342-391940 --memory=2200 --driver=docker  --container-runtime=containerd
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:314: (dbg) Done: /tmp/minikube-v1.9.1.794961713.exe start -p missing-upgrade-20210507222342-391940 --memory=2200 --driver=docker  --container-runtime=containerd: (1m17.966144714s)
version_upgrade_test.go:323: (dbg) Run:  docker stop missing-upgrade-20210507222342-391940
version_upgrade_test.go:323: (dbg) Done: docker stop missing-upgrade-20210507222342-391940: (11.781696688s)
version_upgrade_test.go:328: (dbg) Run:  docker rm missing-upgrade-20210507222342-391940
version_upgrade_test.go:334: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-20210507222342-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:334: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-20210507222342-391940 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd: (4m25.9425817s)
helpers_test.go:171: Cleaning up "missing-upgrade-20210507222342-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-20210507222342-391940
helpers_test.go:174: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-20210507222342-391940: (3.145675707s)
--- PASS: TestMissingContainerUpgrade (359.29s)
TestPause/serial/Start (167.34s)
=== RUN   TestPause/serial/Start
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210507222034-391940 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd
=== CONT  TestPause/serial/Start
pause_test.go:77: (dbg) Done: out/minikube-linux-amd64 start -p pause-20210507222034-391940 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=containerd: (2m47.335059448s)
--- PASS: TestPause/serial/Start (167.34s)
TestPause/serial/SecondStartNoReconfiguration (17.15s)
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:89: (dbg) Run:  out/minikube-linux-amd64 start -p pause-20210507222034-391940 --alsologtostderr -v=1 --driver=docker  --container-runtime=containerd
--- PASS: TestPause/serial/SecondStartNoReconfiguration (17.15s)
TestPause/serial/Pause (0.67s)
=== RUN   TestPause/serial/Pause
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210507222034-391940 --alsologtostderr -v=5
> docker-machine-driver-kvm2: 3.56 MiB / 48.57 MiB [>_______] 7.33% ? p/s ?
> docker-machine-driver-kvm2: 7.98 MiB / 48.57 MiB [->_____] 16.44% ? p/s ?
> docker-machine-driver-kvm2: 12.55 MiB / 48.57 MiB [->____] 25.83% ? p/s ?
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-20210507222034-391940 --output=json --layout=cluster
> docker-machine-driver-kvm2: 17.23 MiB / 48.57 MiB  35.48% 22.65 MiB p/s E
> docker-machine-driver-kvm2: 22.14 MiB / 48.57 MiB  45.58% 22.65 MiB p/s E
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-20210507222034-391940 --output=json --layout=cluster: exit status 2 (498.056677ms)
-- stdout --
	{"Name":"pause-20210507222034-391940","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 8 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.20.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-20210507222034-391940","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/Pause (0.67s)
TestPause/serial/Unpause (0.79s)
=== RUN   TestPause/serial/Unpause
pause_test.go:118: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-20210507222034-391940 --alsologtostderr -v=5
> docker-machine-driver-kvm2: 27.11 MiB / 48.57 MiB  55.81% 22.65 MiB p/s E
> docker-machine-driver-kvm2: 32.94 MiB / 48.57 MiB  67.81% 22.89 MiB p/s E
> docker-machine-driver-kvm2: 38.58 MiB / 48.57 MiB  79.42% 22.89 MiB p/s E
> docker-machine-driver-kvm2: 44.97 MiB / 48.57 MiB  92.58% 22.89 MiB p/s E
> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 28.35 MiB p/s
=== RUN   TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-20210507222034-391940 --alsologtostderr -v=5
> docker-machine-driver-kvm2....: 65 B / 65 B [----------] 100.00% ? p/s 0s
> docker-machine-driver-kvm2: 7.10 MiB / 48.57 MiB [->_____] 14.61% ? p/s ?
> docker-machine-driver-kvm2: 27.14 MiB / 48.57 MiB [--->__] 55.88% ? p/s ?
> docker-machine-driver-kvm2: 45.88 MiB / 48.57 MiB [----->] 94.45% ? p/s ?
> docker-machine-driver-kvm2: 48.57 MiB / 48.57 MiB  100.00% 86.09 MiB p/s
--- PASS: TestKVMDriverInstallOrUpdate (4.54s)
--- PASS: TestPause/serial/Unpause (0.79s)
TestPause/serial/PauseAgain (7.51s)
=== CONT  TestPause/serial/PauseAgain
pause_test.go:107: (dbg) Done: out/minikube-linux-amd64 pause -p pause-20210507222034-391940 --alsologtostderr -v=5: (7.505324025s)
--- PASS: TestPause/serial/PauseAgain (7.51s)
TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-20210507222321-391940
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.73s)
TestStartStop/group/old-k8s-version/serial/FirstStart (132.55s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0
=== CONT  TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (2m12.550892884s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (132.55s)
TestStartStop/group/no-preload/serial/FirstStart (74.58s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m14.58259082s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (74.58s)
TestStartStop/group/no-preload/serial/DeployApp (9.49s)
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context no-preload-20210507222537-391940 create -f testdata/busybox.yaml
start_stop_delete_test.go:168: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [fe25abc6-bcc0-464a-b2c7-b2a80fd29159] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [fe25abc6-bcc0-464a-b2c7-b2a80fd29159] Running
E0507 22:26:59.411708  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
start_stop_delete_test.go:168: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.011127977s
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context no-preload-20210507222537-391940 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.49s)
TestStartStop/group/no-preload/serial/Stop (20.66s)
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:175: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-20210507222537-391940 --alsologtostderr -v=3
start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-20210507222537-391940 --alsologtostderr -v=3: (20.662570751s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (20.66s)
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940: exit status 7 (97.309824ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:186: status error: exit status 7 (may be ok)
start_stop_delete_test.go:193: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-20210507222537-391940
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.20s)
TestStartStop/group/no-preload/serial/SecondStart (70.07s)
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
=== CONT  TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-20210507222537-391940 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m9.725013154s)
start_stop_delete_test.go:209: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (70.07s)
TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context old-k8s-version-20210507222527-391940 create -f testdata/busybox.yaml
start_stop_delete_test.go:168: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [6f550da1-af83-11eb-988e-0242ee37c829] Pending
helpers_test.go:335: "busybox" [6f550da1-af83-11eb-988e-0242ee37c829] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [6f550da1-af83-11eb-988e-0242ee37c829] Running
start_stop_delete_test.go:168: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.019723608s
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context old-k8s-version-20210507222527-391940 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.59s)
TestStartStop/group/old-k8s-version/serial/Stop (20.88s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:175: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=3
start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=3: (20.882399405s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (20.88s)
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.2s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940: exit status 7 (97.724484ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:186: status error: exit status 7 (may be ok)
start_stop_delete_test.go:193: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-20210507222527-391940
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.20s)
TestStartStop/group/old-k8s-version/serial/SecondStart (117.98s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0
E0507 22:28:17.776445  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
=== CONT  TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-20210507222527-391940 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=containerd --kubernetes-version=v1.14.0: (1m57.644797622s)
start_stop_delete_test.go:209: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (117.98s)
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:221: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-vc47l" [4dfc986b-a70c-4a01-9e24-4770a3a6b392] Running
start_stop_delete_test.go:221: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.012708023s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (5.02s)
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.01s)
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:232: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-6fcdf4f6d-vc47l" [4dfc986b-a70c-4a01-9e24-4770a3a6b392] Running
start_stop_delete_test.go:232: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.007515954s
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.01s)
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 ssh -p no-preload-20210507222537-391940 "sudo crictl images -o json"
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.29s)
TestStartStop/group/no-preload/serial/Pause (2.48s)
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-20210507222537-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940: exit status 2 (307.149264ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940: exit status 2 (324.146903ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-20210507222537-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-20210507222537-391940 -n no-preload-20210507222537-391940
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.48s)
TestStartStop/group/embed-certs/serial/FirstStart (133.81s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2
=== CONT  TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2: (2m13.811406832s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (133.81s)
TestStartStop/group/default-k8s-different-port/serial/FirstStart (146.99s)
=== RUN   TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2
=== CONT  TestStartStop/group/default-k8s-different-port/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2: (2m26.989220863s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/FirstStart (146.99s)
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:221: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-d6t7m" [c6790491-af83-11eb-92ed-0242c0a83a02] Running
start_stop_delete_test.go:221: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011287143s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (5.02s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.03s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:232: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-5d8978d65d-d6t7m" [c6790491-af83-11eb-92ed-0242c0a83a02] Running
start_stop_delete_test.go:232: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.026070321s
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.03s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.35s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 ssh -p old-k8s-version-20210507222527-391940 "sudo crictl images -o json"
start_stop_delete_test.go:240: (dbg) Done: out/minikube-linux-amd64 ssh -p old-k8s-version-20210507222527-391940 "sudo crictl images -o json": (1.345261103s)
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210220-5b7e6d01
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (1.35s)

TestStartStop/group/old-k8s-version/serial/Pause (4.6s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Done: out/minikube-linux-amd64 pause -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=1: (2.530097172s)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940: exit status 2 (331.098195ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940: exit status 2 (316.886642ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-20210507222527-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-20210507222527-391940 -n old-k8s-version-20210507222527-391940
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (4.60s)

TestStartStop/group/newest-cni/serial/FirstStart (65.87s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1

=== CONT  TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:158: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m5.868099149s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (65.87s)

TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context embed-certs-20210507222849-391940 create -f testdata/busybox.yaml
start_stop_delete_test.go:168: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [c9532122-ce09-4f38-9c26-94f818051021] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [c9532122-ce09-4f38-9c26-94f818051021] Running
start_stop_delete_test.go:168: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.011548682s
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context embed-certs-20210507222849-391940 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.39s)

TestStartStop/group/embed-certs/serial/Stop (20.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:175: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-20210507222849-391940 --alsologtostderr -v=3
E0507 22:31:20.822168  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-20210507222849-391940 --alsologtostderr -v=3: (20.936915412s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (20.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940: exit status 7 (109.464323ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:186: status error: exit status 7 (may be ok)
start_stop_delete_test.go:193: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-20210507222849-391940
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.23s)

TestStartStop/group/embed-certs/serial/SecondStart (110.47s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2

=== CONT  TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-20210507222849-391940 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2: (1m50.086277809s)
start_stop_delete_test.go:209: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (110.47s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/Stop (1.36s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:175: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-20210507223028-391940 --alsologtostderr -v=3
start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-20210507223028-391940 --alsologtostderr -v=3: (1.358286653s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.36s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940: exit status 7 (98.711635ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:186: status error: exit status 7 (may be ok)
start_stop_delete_test.go:193: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-20210507223028-391940
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

TestStartStop/group/newest-cni/serial/SecondStart (67.8s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1
E0507 22:31:52.730368  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.735623  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.745839  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.766074  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.806413  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:52.886522  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:53.046959  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:53.367809  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:54.008739  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:55.289559  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:57.850314  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:31:59.411350  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
E0507 22:32:02.970949  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory

=== CONT  TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-20210507223028-391940 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubelet.network-plugin=cni --extra-config=kubeadm.pod-network-cidr=192.168.111.111/16 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.22.0-alpha.1: (1m7.448229702s)
start_stop_delete_test.go:209: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (67.80s)

TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.54s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/DeployApp
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context default-k8s-different-port-20210507222942-391940 create -f testdata/busybox.yaml
start_stop_delete_test.go:168: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:335: "busybox" [842be73e-72c2-4c06-b0f7-da4ebb46b202] Pending
helpers_test.go:335: "busybox" [842be73e-72c2-4c06-b0f7-da4ebb46b202] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:335: "busybox" [842be73e-72c2-4c06-b0f7-da4ebb46b202] Running
E0507 22:32:13.211870  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
start_stop_delete_test.go:168: (dbg) TestStartStop/group/default-k8s-different-port/serial/DeployApp: integration-test=busybox healthy within 9.014159682s
start_stop_delete_test.go:168: (dbg) Run:  kubectl --context default-k8s-different-port-20210507222942-391940 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-different-port/serial/DeployApp (9.54s)

TestStartStop/group/default-k8s-different-port/serial/Stop (25.25s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:175: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=3
E0507 22:32:33.692276  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:32:40.848187  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.853432  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.863672  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.883884  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:40.924291  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:41.004569  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:41.164988  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:41.486008  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:42.126932  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:32:43.407665  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory

=== CONT  TestStartStop/group/default-k8s-different-port/serial/Stop
start_stop_delete_test.go:175: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=3: (25.247040585s)
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Stop (25.25s)

TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940: exit status 7 (103.75617ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:186: status error: exit status 7 (may be ok)
start_stop_delete_test.go:193: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-different-port-20210507222942-391940
--- PASS: TestStartStop/group/default-k8s-different-port/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:220: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:231: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-20210507223028-391940 "sudo crictl images -o json"

=== CONT  TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-different-port/serial/SecondStart (114.27s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2

=== CONT  TestStartStop/group/default-k8s-different-port/serial/SecondStart
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-different-port-20210507222942-391940 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=containerd --kubernetes-version=v1.20.2: (1m53.935197532s)
start_stop_delete_test.go:209: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
--- PASS: TestStartStop/group/default-k8s-different-port/serial/SecondStart (114.27s)

TestStartStop/group/newest-cni/serial/Pause (2.25s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-20210507223028-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940: exit status 2 (308.942141ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940: exit status 2 (322.207922ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-20210507223028-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
E0507 22:32:45.968826  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-20210507223028-391940 -n newest-cni-20210507223028-391940
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.25s)

TestNetworkPlugins/group/auto/Start (147.97s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p auto-20210507223250-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd
E0507 22:32:51.089578  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:33:01.330221  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:33:14.653202  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:33:17.777338  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 22:33:21.811302  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/auto/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p auto-20210507223250-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --driver=docker  --container-runtime=containerd: (2m27.966265792s)
--- PASS: TestNetworkPlugins/group/auto/Start (147.97s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.72s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:221: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-zh9tm" [9acafcb9-0b99-4924-8254-b59f0d45eb5c] Running
start_stop_delete_test.go:221: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.713396588s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (5.72s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.31s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:232: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-zh9tm" [9acafcb9-0b99-4924-8254-b59f0d45eb5c] Running
start_stop_delete_test.go:232: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.307015451s
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (6.31s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 ssh -p embed-certs-20210507222849-391940 "sudo crictl images -o json"
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210220-5b7e6d01
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.29s)

TestStartStop/group/embed-certs/serial/Pause (2.55s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-20210507222849-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940: exit status 2 (320.16991ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940: exit status 2 (319.695146ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-20210507222849-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-20210507222849-391940 -n embed-certs-20210507222849-391940
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.55s)

TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:221: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-89qpr" [34855ac4-b932-4d8c-8212-1ea9e04fbfd7] Running
start_stop_delete_test.go:221: (dbg) TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.011593109s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/UserAppExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.01s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:232: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:335: "kubernetes-dashboard-968bcb79-89qpr" [34855ac4-b932-4d8c-8212-1ea9e04fbfd7] Running
start_stop_delete_test.go:232: (dbg) TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.005545987s
--- PASS: TestStartStop/group/default-k8s-different-port/serial/AddonExistsAfterStop (5.01s)

TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:240: (dbg) Run:  out/minikube-linux-amd64 ssh -p default-k8s-different-port-20210507222942-391940 "sudo crictl images -o json"
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210220-5b7e6d01
start_stop_delete_test.go:240: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
start_stop_delete_test.go:240: Found non-minikube image: library/busybox:1.28.4-glibc
start_stop_delete_test.go:240: Found non-minikube image: library/minikube-local-cache-test:functional-20210507215728-391940
--- PASS: TestStartStop/group/default-k8s-different-port/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/default-k8s-different-port/serial/Pause (2.56s)

=== RUN   TestStartStop/group/default-k8s-different-port/serial/Pause
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940: exit status 2 (315.504714ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
start_stop_delete_test.go:247: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940: exit status 2 (337.534428ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:247: status error: exit status 2 (may be ok)
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-different-port-20210507222942-391940 --alsologtostderr -v=1
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
start_stop_delete_test.go:247: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-different-port-20210507222942-391940 -n default-k8s-different-port-20210507222942-391940
--- PASS: TestStartStop/group/default-k8s-different-port/serial/Pause (2.56s)
E0507 22:42:40.847864  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:42:56.716042  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
E0507 22:43:02.472583  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory

TestNetworkPlugins/group/cilium/Start (140.72s)

=== RUN   TestNetworkPlugins/group/cilium/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p cilium-20210507223455-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd
=== CONT  TestNetworkPlugins/group/cilium/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p cilium-20210507223455-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=cilium --driver=docker  --container-runtime=containerd: (2m20.720143674s)
--- PASS: TestNetworkPlugins/group/cilium/Start (140.72s)

TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-20210507223250-391940 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.29s)

TestNetworkPlugins/group/auto/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:113: (dbg) Run:  kubectl --context auto-20210507223250-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-pf5zj" [5f4d13ec-95dd-4fdf-bf14-6173d8bbb162] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-pf5zj" [5f4d13ec-95dd-4fdf-bf14-6173d8bbb162] Running
E0507 22:35:24.692589  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
net_test.go:127: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.005608191s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

TestNetworkPlugins/group/auto/DNS (160.51s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (58.129949ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (55.088308ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (55.812904ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (58.640853ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **

=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (61.730101ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (55.451715ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (72.431475ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (69.880346ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (56.152806ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (56.078717ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0507 22:36:52.729824  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
E0507 22:36:59.412134  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
net_test.go:144: (dbg) Non-zero exit: kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default: exit status 1 (58.003048ms)

** stderr ** 
	Error from server (NotFound): the server could not find the requested resource

** /stderr **
E0507 22:37:09.423485  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:09.428793  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:09.439870  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:09.460667  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:09.500905  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:09.581402  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:09.742161  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:10.062709  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:10.703293  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:11.984274  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:14.544400  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
=== CONT  TestNetworkPlugins/group/auto/DNS
net_test.go:144: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (160.51s)

TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/cilium/ControllerPod
net_test.go:91: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: waiting 10m0s for pods matching "k8s-app=cilium" in namespace "kube-system" ...
helpers_test.go:335: "cilium-zgw2c" [7e2485bb-7a3f-4738-bd8b-f62f21ab84dd] Running
E0507 22:37:19.665031  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:37:20.414810  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/no-preload-20210507222537-391940/client.crt: no such file or directory
net_test.go:91: (dbg) TestNetworkPlugins/group/cilium/ControllerPod: k8s-app=cilium healthy within 5.012650631s
--- PASS: TestNetworkPlugins/group/cilium/ControllerPod (5.02s)

TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/cilium/KubeletFlags
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 ssh -p cilium-20210507223455-391940 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/cilium/KubeletFlags (0.29s)

TestNetworkPlugins/group/cilium/NetCatPod (8.42s)

=== RUN   TestNetworkPlugins/group/cilium/NetCatPod
net_test.go:113: (dbg) Run:  kubectl --context cilium-20210507223455-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-2qg7s" [8e9b873b-1f8a-450d-a67e-dab0395e606a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-2qg7s" [8e9b873b-1f8a-450d-a67e-dab0395e606a] Running
net_test.go:127: (dbg) TestNetworkPlugins/group/cilium/NetCatPod: app=netcat healthy within 8.005524928s
--- PASS: TestNetworkPlugins/group/cilium/NetCatPod (8.42s)

TestNetworkPlugins/group/cilium/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/cilium/DNS
net_test.go:144: (dbg) Run:  kubectl --context cilium-20210507223455-391940 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/cilium/DNS (0.15s)

TestNetworkPlugins/group/cilium/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/cilium/Localhost
net_test.go:163: (dbg) Run:  kubectl --context cilium-20210507223455-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/cilium/Localhost (0.16s)

TestNetworkPlugins/group/cilium/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/cilium/HairPin
net_test.go:176: (dbg) Run:  kubectl --context cilium-20210507223455-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0507 22:37:29.905319  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/cilium/HairPin (0.14s)

TestNetworkPlugins/group/calico/Start (145.5s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p calico-20210507223733-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd

=== CONT  TestNetworkPlugins/group/calico/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p calico-20210507223733-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=calico --driver=docker  --container-runtime=containerd: (2m25.495958291s)
--- PASS: TestNetworkPlugins/group/calico/Start (145.50s)

TestPause/serial/VerifyStatus (0.5s)
--- PASS: TestPause/serial/VerifyStatus (0.50s)

TestNetworkPlugins/group/custom-weave/Start (152.84s)

=== RUN   TestNetworkPlugins/group/custom-weave/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p custom-weave-20210507223739-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd
E0507 22:37:40.847638  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory
E0507 22:37:50.386062  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:38:08.533398  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/old-k8s-version-20210507222527-391940/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/custom-weave/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p custom-weave-20210507223739-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=testdata/weavenet.yaml --driver=docker  --container-runtime=containerd: (2m32.840035122s)
--- PASS: TestNetworkPlugins/group/custom-weave/Start (152.84s)

TestNetworkPlugins/group/auto/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:163: (dbg) Run:  kubectl --context auto-20210507223250-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.20s)

TestNetworkPlugins/group/enable-default-cni/Start (136.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-20210507223814-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd
E0507 22:38:17.776884  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/addons-20210507215008-391940/client.crt: no such file or directory
E0507 22:38:31.347016  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory
E0507 22:39:53.267873  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/default-k8s-different-port-20210507222942-391940/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-20210507223814-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --enable-default-cni=true --driver=docker  --container-runtime=containerd: (2m16.158432203s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (136.16s)

TestNetworkPlugins/group/calico/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:91: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:335: "calico-node-974cx" [4a5d76fa-338c-48af-b63e-5aed723b340a] Running
E0507 22:40:02.457788  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/functional-20210507215728-391940/client.crt: no such file or directory
net_test.go:91: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.013224565s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-20210507223733-391940 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.30s)

TestNetworkPlugins/group/calico/NetCatPod (9.27s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:113: (dbg) Run:  kubectl --context calico-20210507223733-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-7mkfm" [1f41ae41-a57c-47ce-9f53-9619f2c92a69] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-7mkfm" [1f41ae41-a57c-47ce-9f53-9619f2c92a69] Running

=== CONT  TestNetworkPlugins/group/calico/NetCatPod
net_test.go:127: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 9.006166247s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (9.27s)

TestNetworkPlugins/group/custom-weave/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/custom-weave/KubeletFlags
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-weave-20210507223739-391940 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-weave/KubeletFlags (0.29s)

TestNetworkPlugins/group/custom-weave/NetCatPod (8.43s)

=== RUN   TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:113: (dbg) Run:  kubectl --context custom-weave-20210507223739-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-7pqvk" [631065b7-2f0e-407c-ab45-42ea8e1480e3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])

=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
helpers_test.go:335: "netcat-66fbc655d5-7pqvk" [631065b7-2f0e-407c-ab45-42ea8e1480e3] Running

=== CONT  TestNetworkPlugins/group/custom-weave/NetCatPod
net_test.go:127: (dbg) TestNetworkPlugins/group/custom-weave/NetCatPod: app=netcat healthy within 8.005958535s
--- PASS: TestNetworkPlugins/group/custom-weave/NetCatPod (8.43s)

TestNetworkPlugins/group/calico/DNS (0.18s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:144: (dbg) Run:  kubectl --context calico-20210507223733-391940 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.18s)

TestNetworkPlugins/group/calico/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:163: (dbg) Run:  kubectl --context calico-20210507223733-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.16s)

TestNetworkPlugins/group/calico/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:176: (dbg) Run:  kubectl --context calico-20210507223733-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.15s)

TestNetworkPlugins/group/kindnet/Start (122.6s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-20210507224017-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd
E0507 22:40:18.629725  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:18.635217  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:18.645446  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:18.665610  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:18.705956  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:18.786084  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:18.946829  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:19.267563  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
E0507 22:40:19.907783  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/kindnet/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-20210507224017-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=kindnet --driver=docker  --container-runtime=containerd: (2m2.602931853s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (122.60s)

TestNetworkPlugins/group/bridge/Start (163.09s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-20210507224024-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd
E0507 22:40:28.869646  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory

=== CONT  TestNetworkPlugins/group/bridge/Start
net_test.go:83: (dbg) Done: out/minikube-linux-amd64 start -p bridge-20210507224024-391940 --memory=2048 --alsologtostderr --wait=true --wait-timeout=5m --cni=bridge --driver=docker  --container-runtime=containerd: (2m43.090043803s)
--- PASS: TestNetworkPlugins/group/bridge/Start (163.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-20210507223814-391940 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.30s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.27s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:113: (dbg) Run:  kubectl --context enable-default-cni-20210507223814-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-k5kh6" [16b980be-8de4-4276-bb02-9b3c4da10396] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0507 22:40:39.110528  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/auto-20210507223250-391940/client.crt: no such file or directory
helpers_test.go:335: "netcat-66fbc655d5-k5kh6" [16b980be-8de4-4276-bb02-9b3c4da10396] Running
net_test.go:127: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 18.005828957s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (18.27s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:144: (dbg) Run:  kubectl --context enable-default-cni-20210507223814-391940 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.17s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:163: (dbg) Run:  kubectl --context enable-default-cni-20210507223814-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.15s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:176: (dbg) Run:  kubectl --context enable-default-cni-20210507223814-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:91: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:335: "kindnet-q67jp" [fa4108a6-8fc0-4ba5-ba81-ea32d753a85a] Running
E0507 22:42:20.874384  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
net_test.go:91: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.012414354s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.02s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-20210507224017-391940 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.29s)

TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:113: (dbg) Run:  kubectl --context kindnet-20210507224017-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-mzscd" [1142810f-fc34-41de-9571-e809f058e5dc] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0507 22:42:25.994891  391940 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/linux-amd64-docker-containerd-master-389406-c31bd57f93d45726e4bd30607374f8c720e70e95/.minikube/profiles/cilium-20210507223455-391940/client.crt: no such file or directory
helpers_test.go:335: "netcat-66fbc655d5-mzscd" [1142810f-fc34-41de-9571-e809f058e5dc] Running
net_test.go:127: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 9.004899415s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (9.26s)

TestNetworkPlugins/group/kindnet/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:144: (dbg) Run:  kubectl --context kindnet-20210507224017-391940 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.15s)

TestNetworkPlugins/group/kindnet/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:163: (dbg) Run:  kubectl --context kindnet-20210507224017-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

TestNetworkPlugins/group/kindnet/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:176: (dbg) Run:  kubectl --context kindnet-20210507224017-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.15s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:99: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-20210507224024-391940 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.29s)

TestNetworkPlugins/group/bridge/NetCatPod (8.25s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:113: (dbg) Run:  kubectl --context bridge-20210507224024-391940 replace --force -f testdata/netcat-deployment.yaml
net_test.go:127: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:335: "netcat-66fbc655d5-99v59" [28914f99-60e6-4041-b4f6-790f314b7988] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:335: "netcat-66fbc655d5-99v59" [28914f99-60e6-4041-b4f6-790f314b7988] Running
net_test.go:127: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 8.006212618s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (8.25s)

TestNetworkPlugins/group/bridge/DNS (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:144: (dbg) Run:  kubectl --context bridge-20210507224024-391940 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.16s)

TestNetworkPlugins/group/bridge/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:163: (dbg) Run:  kubectl --context bridge-20210507224024-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.13s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:176: (dbg) Run:  kubectl --context bridge-20210507224024-391940 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (21/247)

TestDownloadOnly/v1.14.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.14.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.14.0/cached-images (0.00s)

TestDownloadOnly/v1.14.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.14.0/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.14.0/kubectl (0.00s)

TestDownloadOnly/v1.20.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.2/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.2/cached-images (0.00s)

TestDownloadOnly/v1.20.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.2/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.2/kubectl (0.00s)

TestDownloadOnly/v1.22.0-alpha.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.22.0-alpha.1/cached-images (0.00s)

TestDownloadOnly/v1.22.0-alpha.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.22.0-alpha.1/kubectl
aaa_download_only_test.go:149: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.22.0-alpha.1/kubectl (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:465: skipping olm test till this issue is fixed https://github.com/kubernetes/minikube/issues/11311
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:35: skipping: only runs with docker container runtime, currently testing containerd
--- SKIP: TestDockerFlags (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:116: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:189: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:411: only validate docker env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:471: only validate podman env with docker container runtime, currently testing containerd
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:96: DNS forwarding is supported for darwin only now, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:39: Only test none driver.
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:43: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:43: skaffold requires docker-env, currently testing containerd container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.57s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:91: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:171: Cleaning up "disable-driver-mounts-20210507222941-391940" profile ...
helpers_test.go:174: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-20210507222941-391940
--- SKIP: TestStartStop/group/disable-driver-mounts (0.57s)

TestNetworkPlugins/group/flannel (0s)

=== RUN   TestNetworkPlugins/group/flannel
net_test.go:69: flannel is not yet compatible with Docker driver: iptables v1.8.3 (legacy): Couldn't load target `CNI-x': No such file or directory
--- SKIP: TestNetworkPlugins/group/flannel (0.00s)
